00:00:00.001 Started by upstream project "autotest-per-patch" build number 132541
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.084 The recommended git tool is: git
00:00:00.084 using credential 00000000-0000-0000-0000-000000000002
00:00:00.085 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.126 Fetching changes from the remote Git repository
00:00:00.132 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.182 Using shallow fetch with depth 1
00:00:00.182 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.182 > git --version # timeout=10
00:00:00.232 > git --version # 'git version 2.39.2'
00:00:00.232 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.260 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.260 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.327 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.338 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.349 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.349 > git config core.sparsecheckout # timeout=10
00:00:05.361 > git read-tree -mu HEAD # timeout=10
00:00:05.378 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.399 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.400 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.488 [Pipeline] Start of Pipeline
00:00:05.504 [Pipeline] library
00:00:05.506 Loading library shm_lib@master
00:00:05.506 Library shm_lib@master is cached. Copying from home.
00:00:05.526 [Pipeline] node
00:00:05.535 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest
00:00:05.536 [Pipeline] {
00:00:05.547 [Pipeline] catchError
00:00:05.549 [Pipeline] {
00:00:05.560 [Pipeline] wrap
00:00:05.569 [Pipeline] {
00:00:05.575 [Pipeline] stage
00:00:05.577 [Pipeline] { (Prologue)
00:00:05.592 [Pipeline] echo
00:00:05.593 Node: VM-host-SM9
00:00:05.599 [Pipeline] cleanWs
00:00:05.606 [WS-CLEANUP] Deleting project workspace...
00:00:05.606 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.612 [WS-CLEANUP] done
00:00:05.841 [Pipeline] setCustomBuildProperty
00:00:05.926 [Pipeline] httpRequest
00:00:06.227 [Pipeline] echo
00:00:06.228 Sorcerer 10.211.164.20 is alive
00:00:06.234 [Pipeline] retry
00:00:06.236 [Pipeline] {
00:00:06.245 [Pipeline] httpRequest
00:00:06.248 HttpMethod: GET
00:00:06.249 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.249 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.253 Response Code: HTTP/1.1 200 OK
00:00:06.253 Success: Status code 200 is in the accepted range: 200,404
00:00:06.254 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.392 [Pipeline] }
00:00:08.406 [Pipeline] // retry
00:00:08.413 [Pipeline] sh
00:00:08.691 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.707 [Pipeline] httpRequest
00:00:09.703 [Pipeline] echo
00:00:09.705 Sorcerer 10.211.164.20 is alive
00:00:09.716 [Pipeline] retry
00:00:09.719 [Pipeline] {
00:00:09.737 [Pipeline] httpRequest
00:00:09.742 HttpMethod: GET
00:00:09.742 URL: http://10.211.164.20/packages/spdk_baa2dd0a5b3e7241072d15d487d7e6ee56dacd80.tar.gz
00:00:09.743 Sending request to url: http://10.211.164.20/packages/spdk_baa2dd0a5b3e7241072d15d487d7e6ee56dacd80.tar.gz
00:00:09.759 Response Code: HTTP/1.1 200 OK
00:00:09.760 Success: Status code 200 is in the accepted range: 200,404
00:00:09.760 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_baa2dd0a5b3e7241072d15d487d7e6ee56dacd80.tar.gz
00:00:51.046 [Pipeline] }
00:00:51.065 [Pipeline] // retry
00:00:51.073 [Pipeline] sh
00:00:51.353 + tar --no-same-owner -xf spdk_baa2dd0a5b3e7241072d15d487d7e6ee56dacd80.tar.gz
00:00:54.647 [Pipeline] sh
00:00:54.925 + git -C spdk log --oneline -n5
00:00:54.925 baa2dd0a5 dif: Set DIF field to 0 explicitly if its check is disabled
00:00:54.925 a91d250fa bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata
00:00:54.925 ff173863b ut/bdev: Remove duplication with many stups among unit test files
00:00:54.925 658cb4c04 accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx
00:00:54.925 fc308e3c5 accel: Fix comments for spdk_accel_*_dif_verify_copy()
00:00:54.943 [Pipeline] writeFile
00:00:54.959 [Pipeline] sh
00:00:55.239 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:55.251 [Pipeline] sh
00:00:55.530 + cat autorun-spdk.conf
00:00:55.530 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.530 SPDK_TEST_NVME=1
00:00:55.530 SPDK_TEST_FTL=1
00:00:55.530 SPDK_TEST_ISAL=1
00:00:55.530 SPDK_RUN_ASAN=1
00:00:55.530 SPDK_RUN_UBSAN=1
00:00:55.530 SPDK_TEST_XNVME=1
00:00:55.530 SPDK_TEST_NVME_FDP=1
00:00:55.530 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:55.536 RUN_NIGHTLY=0
00:00:55.538 [Pipeline] }
00:00:55.550 [Pipeline] // stage
00:00:55.564 [Pipeline] stage
00:00:55.567 [Pipeline] { (Run VM)
00:00:55.580 [Pipeline] sh
00:00:55.859 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:55.859 + echo 'Start stage prepare_nvme.sh'
00:00:55.859 Start stage prepare_nvme.sh
00:00:55.859 + [[ -n 1 ]]
00:00:55.859 + disk_prefix=ex1
00:00:55.859 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:00:55.859 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:00:55.859 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:00:55.859 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.859 ++ SPDK_TEST_NVME=1
00:00:55.859 ++ SPDK_TEST_FTL=1
00:00:55.859 ++ SPDK_TEST_ISAL=1
00:00:55.859 ++ SPDK_RUN_ASAN=1
00:00:55.859 ++ SPDK_RUN_UBSAN=1
00:00:55.859 ++ SPDK_TEST_XNVME=1
00:00:55.859 ++ SPDK_TEST_NVME_FDP=1
00:00:55.859 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:55.859 ++ RUN_NIGHTLY=0
00:00:55.859 + cd /var/jenkins/workspace/nvme-vg-autotest
00:00:55.859 + nvme_files=()
00:00:55.859 + declare -A nvme_files
00:00:55.859 + backend_dir=/var/lib/libvirt/images/backends
00:00:55.859 + nvme_files['nvme.img']=5G
00:00:55.859 + nvme_files['nvme-cmb.img']=5G
00:00:55.859 + nvme_files['nvme-multi0.img']=4G
00:00:55.859 + nvme_files['nvme-multi1.img']=4G
00:00:55.859 + nvme_files['nvme-multi2.img']=4G
00:00:55.859 + nvme_files['nvme-openstack.img']=8G
00:00:55.859 + nvme_files['nvme-zns.img']=5G
00:00:55.859 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:55.859 + (( SPDK_TEST_FTL == 1 ))
00:00:55.859 + nvme_files["nvme-ftl.img"]=6G
00:00:55.859 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:55.859 + nvme_files["nvme-fdp.img"]=1G
00:00:55.859 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:55.859 + for nvme in "${!nvme_files[@]}"
00:00:55.859 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:00:55.859 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:55.859 + for nvme in "${!nvme_files[@]}"
00:00:55.859 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G
00:00:55.859 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:55.859 + for nvme in "${!nvme_files[@]}"
00:00:55.859 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:00:56.118 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:56.118 + for nvme in "${!nvme_files[@]}"
00:00:56.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:00:56.118 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:56.118 + for nvme in "${!nvme_files[@]}"
00:00:56.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:00:56.118 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:56.118 + for nvme in "${!nvme_files[@]}"
00:00:56.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:00:56.118 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:56.118 + for nvme in "${!nvme_files[@]}"
00:00:56.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:00:56.118 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:56.118 + for nvme in "${!nvme_files[@]}"
00:00:56.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G
00:00:56.119 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:56.119 + for nvme in "${!nvme_files[@]}"
00:00:56.119 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:00:56.377 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:56.377 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:00:56.377 + echo 'End stage prepare_nvme.sh'
00:00:56.377 End stage prepare_nvme.sh
00:00:56.389 [Pipeline] sh
00:00:56.672 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:56.672 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:56.931
00:00:56.931 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:00:56.931 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:00:56.931 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:00:56.931 HELP=0
00:00:56.931 DRY_RUN=0
00:00:56.931 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,
00:00:56.931 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:56.931 NVME_AUTO_CREATE=0
00:00:56.931 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,,
00:00:56.931 NVME_CMB=,,,,
00:00:56.931 NVME_PMR=,,,,
00:00:56.931 NVME_ZNS=,,,,
00:00:56.931 NVME_MS=true,,,,
00:00:56.931 NVME_FDP=,,,on,
00:00:56.931 SPDK_VAGRANT_DISTRO=fedora39
00:00:56.931 SPDK_VAGRANT_VMCPU=10
00:00:56.931 SPDK_VAGRANT_VMRAM=12288
00:00:56.931 SPDK_VAGRANT_PROVIDER=libvirt
00:00:56.931 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:56.931 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:56.931 SPDK_OPENSTACK_NETWORK=0
00:00:56.931 VAGRANT_PACKAGE_BOX=0
00:00:56.931 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:56.931 FORCE_DISTRO=true
00:00:56.931 VAGRANT_BOX_VERSION=
00:00:56.931 EXTRA_VAGRANTFILES=
00:00:56.931 NIC_MODEL=e1000
00:00:56.931
00:00:56.931 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:00:56.931 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:01.115 Bringing machine 'default' up with 'libvirt' provider...
00:01:01.373 ==> default: Creating image (snapshot of base box volume).
00:01:01.373 ==> default: Creating domain with the following settings...
00:01:01.373 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1732646732_bcb1fe2f5cc800b26528
00:01:01.373 ==> default:  -- Domain type: kvm
00:01:01.373 ==> default:  -- Cpus: 10
00:01:01.373 ==> default:  -- Feature: acpi
00:01:01.373 ==> default:  -- Feature: apic
00:01:01.373 ==> default:  -- Feature: pae
00:01:01.373 ==> default:  -- Memory: 12288M
00:01:01.373 ==> default:  -- Memory Backing: hugepages:
00:01:01.373 ==> default:  -- Management MAC:
00:01:01.373 ==> default:  -- Loader:
00:01:01.373 ==> default:  -- Nvram:
00:01:01.373 ==> default:  -- Base box: spdk/fedora39
00:01:01.373 ==> default:  -- Storage pool: default
00:01:01.373 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732646732_bcb1fe2f5cc800b26528.img (20G)
00:01:01.373 ==> default:  -- Volume Cache: default
00:01:01.373 ==> default:  -- Kernel:
00:01:01.373 ==> default:  -- Initrd:
00:01:01.373 ==> default:  -- Graphics Type: vnc
00:01:01.373 ==> default:  -- Graphics Port: -1
00:01:01.373 ==> default:  -- Graphics IP: 127.0.0.1
00:01:01.373 ==> default:  -- Graphics Password: Not defined
00:01:01.373 ==> default:  -- Video Type: cirrus
00:01:01.373 ==> default:  -- Video VRAM: 9216
00:01:01.373 ==> default:  -- Sound Type:
00:01:01.373 ==> default:  -- Keymap: en-us
00:01:01.373 ==> default:  -- TPM Path:
00:01:01.373 ==> default:  -- INPUT: type=mouse, bus=ps2
00:01:01.373 ==> default:  -- Command line args:
00:01:01.373 ==> default:    -> value=-device,
00:01:01.373 ==> default:    -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:01.373 ==> default:    -> value=-drive,
00:01:01.373 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:01.373 ==> default:    -> value=-device,
00:01:01.373 ==> default:    -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:01.373 ==> default:    -> value=-device,
00:01:01.373 ==> default:    -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:01.373 ==> default:    -> value=-drive,
00:01:01.373 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:01:01.373 ==> default:    -> value=-device,
00:01:01.373 ==> default:    -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:01.373 ==> default:    -> value=-device,
00:01:01.373 ==> default:    -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:01.373 ==> default:    -> value=-drive,
00:01:01.373 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:01.373 ==> default:    -> value=-device,
00:01:01.373 ==> default:    -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:01.373 ==> default:    -> value=-drive,
00:01:01.373 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:01.373 ==> default:    -> value=-device,
00:01:01.373 ==> default:    -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:01.373 ==> default:    -> value=-drive,
00:01:01.373 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:01.373 ==> default:    -> value=-device,
00:01:01.373 ==> default:    -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:01.373 ==> default:    -> value=-device,
00:01:01.373 ==> default:    -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:01.373 ==> default:    -> value=-device,
00:01:01.373 ==> default:    -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:01.373 ==> default:    -> value=-drive,
00:01:01.373 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:01.373 ==> default:    -> value=-device,
00:01:01.373 ==> default:    -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:01.633 ==> default: Creating shared folders metadata...
00:01:01.633 ==> default: Starting domain.
00:01:03.075 ==> default: Waiting for domain to get an IP address...
00:01:21.164 ==> default: Waiting for SSH to become available...
00:01:21.164 ==> default: Configuring and enabling network interfaces...
00:01:24.441     default: SSH address: 192.168.121.19:22
00:01:24.441     default: SSH username: vagrant
00:01:24.441     default: SSH auth method: private key
00:01:26.343 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:34.451 ==> default: Mounting SSHFS shared folder...
00:01:35.383 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:35.383 ==> default: Checking Mount..
00:01:36.766 ==> default: Folder Successfully Mounted!
00:01:36.766 ==> default: Running provisioner: file...
00:01:37.344     default: ~/.gitconfig => .gitconfig
00:01:37.603
00:01:37.603 SUCCESS!
00:01:37.603
00:01:37.603 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:37.603 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:37.603 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:37.603
00:01:37.612 [Pipeline] }
00:01:37.627 [Pipeline] // stage
00:01:37.636 [Pipeline] dir
00:01:37.636 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:37.638 [Pipeline] {
00:01:37.651 [Pipeline] catchError
00:01:37.652 [Pipeline] {
00:01:37.665 [Pipeline] sh
00:01:37.944 + vagrant ssh-config --host vagrant
00:01:37.944 + sed -ne /^Host/,$p
00:01:37.944 + tee ssh_conf
00:01:42.131 Host vagrant
00:01:42.131   HostName 192.168.121.19
00:01:42.131   User vagrant
00:01:42.131   Port 22
00:01:42.131   UserKnownHostsFile /dev/null
00:01:42.131   StrictHostKeyChecking no
00:01:42.131   PasswordAuthentication no
00:01:42.131   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:42.131   IdentitiesOnly yes
00:01:42.132   LogLevel FATAL
00:01:42.132   ForwardAgent yes
00:01:42.132   ForwardX11 yes
00:01:42.132
00:01:42.146 [Pipeline] withEnv
00:01:42.149 [Pipeline] {
00:01:42.165 [Pipeline] sh
00:01:42.446 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:42.446 source /etc/os-release
00:01:42.446 [[ -e /image.version ]] && img=$(< /image.version)
00:01:42.446 # Minimal, systemd-like check.
00:01:42.446 if [[ -e /.dockerenv ]]; then
00:01:42.446   # Clear garbage from the node's name:
00:01:42.446   # agt-er_autotest_547-896 -> autotest_547-896
00:01:42.446   # $HOSTNAME is the actual container id
00:01:42.446   agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:42.446   if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:42.446     # We can assume this is a mount from a host where container is running,
00:01:42.446     # so fetch its hostname to easily identify the target swarm worker.
00:01:42.446     container="$(< /etc/hostname) ($agent)"
00:01:42.446   else
00:01:42.446     # Fallback
00:01:42.446     container=$agent
00:01:42.446   fi
00:01:42.446 fi
00:01:42.446 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:42.446
00:01:42.715 [Pipeline] }
00:01:42.736 [Pipeline] // withEnv
00:01:42.745 [Pipeline] setCustomBuildProperty
00:01:42.761 [Pipeline] stage
00:01:42.763 [Pipeline] { (Tests)
00:01:42.781 [Pipeline] sh
00:01:43.063 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:43.339 [Pipeline] sh
00:01:43.625 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:43.641 [Pipeline] timeout
00:01:43.641 Timeout set to expire in 50 min
00:01:43.643 [Pipeline] {
00:01:43.657 [Pipeline] sh
00:01:43.937 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:44.503 HEAD is now at baa2dd0a5 dif: Set DIF field to 0 explicitly if its check is disabled
00:01:44.524 [Pipeline] sh
00:01:44.804 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:45.078 [Pipeline] sh
00:01:45.358 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:45.633 [Pipeline] sh
00:01:45.914 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:01:45.914 ++ readlink -f spdk_repo
00:01:46.173 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:46.173 + [[ -n /home/vagrant/spdk_repo ]]
00:01:46.173 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:46.173 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:46.173 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:46.173 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:46.173 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:46.173 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:46.173 + cd /home/vagrant/spdk_repo
00:01:46.173 + source /etc/os-release
00:01:46.173 ++ NAME='Fedora Linux'
00:01:46.173 ++ VERSION='39 (Cloud Edition)'
00:01:46.173 ++ ID=fedora
00:01:46.173 ++ VERSION_ID=39
00:01:46.173 ++ VERSION_CODENAME=
00:01:46.173 ++ PLATFORM_ID=platform:f39
00:01:46.173 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:46.173 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:46.173 ++ LOGO=fedora-logo-icon
00:01:46.173 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:46.173 ++ HOME_URL=https://fedoraproject.org/
00:01:46.173 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:46.173 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:46.173 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:46.173 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:46.173 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:46.173 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:46.173 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:46.173 ++ SUPPORT_END=2024-11-12
00:01:46.173 ++ VARIANT='Cloud Edition'
00:01:46.173 ++ VARIANT_ID=cloud
00:01:46.173 + uname -a
00:01:46.173 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:46.173 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:46.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:46.688 Hugepages
00:01:46.688 node     hugesize     free /  total
00:01:46.688 node0   1048576kB        0 /      0
00:01:46.688 node0      2048kB        0 /      0
00:01:46.688
00:01:46.688 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:46.947 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:01:46.947 NVMe     0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:01:46.947 NVMe     0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:01:46.947 NVMe     0000:00:12.0    1b36   0010   unknown nvme             nvme2      nvme2n1 nvme2n2 nvme2n3
00:01:46.947 NVMe     0000:00:13.0    1b36   0010   unknown nvme             nvme3      nvme3n1
00:01:46.947 + rm -f /tmp/spdk-ld-path
00:01:46.947 + source autorun-spdk.conf
00:01:46.947 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:46.947 ++ SPDK_TEST_NVME=1
00:01:46.947 ++ SPDK_TEST_FTL=1
00:01:46.947 ++ SPDK_TEST_ISAL=1
00:01:46.947 ++ SPDK_RUN_ASAN=1
00:01:46.947 ++ SPDK_RUN_UBSAN=1
00:01:46.947 ++ SPDK_TEST_XNVME=1
00:01:46.947 ++ SPDK_TEST_NVME_FDP=1
00:01:46.947 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:46.947 ++ RUN_NIGHTLY=0
00:01:46.947 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:46.947 + [[ -n '' ]]
00:01:46.947 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:46.947 + for M in /var/spdk/build-*-manifest.txt
00:01:46.947 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:46.947 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:46.947 + for M in /var/spdk/build-*-manifest.txt
00:01:46.947 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:46.947 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:46.947 + for M in /var/spdk/build-*-manifest.txt
00:01:46.947 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:46.947 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:46.947 ++ uname
00:01:46.947 + [[ Linux == \L\i\n\u\x ]]
00:01:46.947 + sudo dmesg -T
00:01:46.947 + sudo dmesg --clear
00:01:46.947 + dmesg_pid=5288
00:01:46.947 + sudo dmesg -Tw
00:01:46.947 + [[ Fedora Linux == FreeBSD ]]
00:01:46.947 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:46.947 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:46.947 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:46.947 + [[ -x /usr/src/fio-static/fio ]]
00:01:46.947 + export FIO_BIN=/usr/src/fio-static/fio
00:01:46.947 + FIO_BIN=/usr/src/fio-static/fio
00:01:46.947 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:46.947 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:46.947 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:46.948 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:46.948 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:46.948 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:46.948 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:46.948 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:46.948 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:47.207 18:46:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:47.207 18:46:18 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:47.207 18:46:18 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:47.207 18:46:18 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:47.207 18:46:18 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:47.207 18:46:18 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:47.207 18:46:18 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:47.207 18:46:18 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:47.207 18:46:18 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:47.207 18:46:18 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:47.207 18:46:18 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:47.207 18:46:18 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:01:47.207 18:46:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:47.207 18:46:18 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:47.207 18:46:18 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:47.207 18:46:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:47.207 18:46:18 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:47.207 18:46:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:47.207 18:46:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:47.207 18:46:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:47.207 18:46:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.207 18:46:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.207 18:46:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.207 18:46:18 -- paths/export.sh@5 -- $ export PATH
00:01:47.207 18:46:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.207 18:46:18 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:47.207 18:46:18 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:47.207 18:46:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732646778.XXXXXX
00:01:47.207 18:46:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732646778.IAVlwo
00:01:47.207 18:46:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:47.207 18:46:18 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:47.207 18:46:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:47.207 18:46:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:47.207 18:46:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:47.207 18:46:18 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:47.207 18:46:18 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:47.207 18:46:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:47.207 18:46:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:47.207 18:46:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:47.207 18:46:18 -- pm/common@17 -- $ local monitor
00:01:47.207 18:46:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.207 18:46:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.207 18:46:18 -- pm/common@25 -- $ sleep 1
00:01:47.207 18:46:18 -- pm/common@21 -- $ date +%s
00:01:47.207 18:46:18 -- pm/common@21 -- $ date +%s
00:01:47.207 18:46:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732646778
00:01:47.207 18:46:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732646778
00:01:47.207 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732646778_collect-cpu-load.pm.log
00:01:47.207 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732646778_collect-vmstat.pm.log
00:01:48.142 18:46:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:48.142 18:46:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:48.142 18:46:19 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:48.142 18:46:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:48.142 18:46:19 -- spdk/autobuild.sh@16 -- $ date -u
00:01:48.142 Tue Nov 26 06:46:19 PM UTC 2024
00:01:48.142 18:46:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:48.142 v25.01-pre-254-gbaa2dd0a5
00:01:48.142 18:46:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:48.142 18:46:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:48.142 18:46:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:48.142 18:46:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:48.142 18:46:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:48.142 ************************************
00:01:48.142 START TEST asan
00:01:48.142 ************************************
00:01:48.142 using asan
00:01:48.142 18:46:19 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:48.142
00:01:48.142 real	0m0.000s
00:01:48.142 user	0m0.000s
00:01:48.142 sys	0m0.000s
00:01:48.142 18:46:19 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:48.142 18:46:19 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:48.142 ************************************
00:01:48.142 END TEST asan
00:01:48.142 ************************************
00:01:48.400 18:46:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:48.400 18:46:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:48.400 18:46:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:48.400 18:46:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:48.400 18:46:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:48.400 ************************************
00:01:48.400 START TEST ubsan
00:01:48.400 ************************************
00:01:48.400 using ubsan
00:01:48.400 18:46:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:48.400
00:01:48.400 real	0m0.000s
00:01:48.400 user	0m0.000s
00:01:48.400 sys	0m0.000s
00:01:48.400 ************************************
00:01:48.400 END TEST ubsan
00:01:48.400 18:46:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:48.400 18:46:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:48.400 ************************************
00:01:48.400 18:46:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:48.400 18:46:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:48.400 18:46:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:48.400 18:46:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:48.400 18:46:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:48.400 18:46:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:48.400 18:46:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:48.400 18:46:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:48.400 18:46:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:48.400 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:48.400 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:48.966 Using 'verbs' RDMA provider
00:02:02.568 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:14.764 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:15.022 Creating mk/config.mk...done.
00:02:15.022 Creating mk/cc.flags.mk...done.
00:02:15.022 Type 'make' to build.
00:02:15.022 18:46:46 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:15.022 18:46:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:15.022 18:46:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:15.022 18:46:46 -- common/autotest_common.sh@10 -- $ set +x
00:02:15.022 ************************************
00:02:15.022 START TEST make
00:02:15.022 ************************************
00:02:15.022 18:46:46 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:15.280 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:15.280 	export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:15.280 	meson setup builddir \
00:02:15.281 	-Dwith-libaio=enabled \
00:02:15.281 	-Dwith-liburing=enabled \
00:02:15.281 	-Dwith-libvfn=disabled \
00:02:15.281 	-Dwith-spdk=disabled \
00:02:15.281 	-Dexamples=false \
00:02:15.281 	-Dtests=false \
00:02:15.281 	-Dtools=false && \
00:02:15.281 	meson compile -C builddir && \
00:02:15.281 	cd -)
00:02:15.281 make[1]: Nothing to be done for 'all'.
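[Note: the recipe echoed above is the complete xnvme configuration step the build runs; as a minimal sketch, assuming an SPDK checkout with the xnvme submodule present and meson/ninja installed, the same step can be reproduced outside CI with:

# Mirrors the flags recorded in the log above; paths are illustrative.
cd spdk/xnvme
export PKG_CONFIG_PATH="$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig"
meson setup builddir \
    -Dwith-libaio=enabled \
    -Dwith-liburing=enabled \
    -Dwith-libvfn=disabled \
    -Dwith-spdk=disabled \
    -Dexamples=false -Dtests=false -Dtools=false
meson compile -C builddir

The -Dwith-spdk=disabled and -Dwith-libvfn=disabled flags match the "Subproject spdk : skipped" and "Dependency libvfn skipped" lines in the meson output that follows.]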
00:02:18.659 The Meson build system
00:02:18.659 Version: 1.5.0
00:02:18.659 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:18.659 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:18.659 Build type: native build
00:02:18.659 Project name: xnvme
00:02:18.659 Project version: 0.7.5
00:02:18.659 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:18.659 C linker for the host machine: cc ld.bfd 2.40-14
00:02:18.659 Host machine cpu family: x86_64
00:02:18.659 Host machine cpu: x86_64
00:02:18.659 Message: host_machine.system: linux
00:02:18.659 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:18.659 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:18.659 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:18.659 Run-time dependency threads found: YES
00:02:18.659 Has header "setupapi.h" : NO
00:02:18.659 Has header "linux/blkzoned.h" : YES
00:02:18.659 Has header "linux/blkzoned.h" : YES (cached)
00:02:18.659 Has header "libaio.h" : YES
00:02:18.659 Library aio found: YES
00:02:18.659 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:18.659 Run-time dependency liburing found: YES 2.2
00:02:18.659 Dependency libvfn skipped: feature with-libvfn disabled
00:02:18.659 Found CMake: /usr/bin/cmake (3.27.7)
00:02:18.659 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:18.659 Subproject spdk : skipped: feature with-spdk disabled
00:02:18.659 Run-time dependency appleframeworks found: NO (tried framework)
00:02:18.659 Run-time dependency appleframeworks found: NO (tried framework)
00:02:18.659 Library rt found: YES
00:02:18.659 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:18.659 Configuring xnvme_config.h using configuration
00:02:18.659 Configuring xnvme.spec using configuration
00:02:18.659 Run-time dependency bash-completion found: YES 2.11
00:02:18.659 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:18.659 Program cp found: YES (/usr/bin/cp)
00:02:18.659 Build targets in project: 3
00:02:18.659
00:02:18.659 xnvme 0.7.5
00:02:18.659
00:02:18.659   Subprojects
00:02:18.659     spdk         : NO Feature 'with-spdk' disabled
00:02:18.659
00:02:18.659   User defined options
00:02:18.659     examples     : false
00:02:18.659     tests        : false
00:02:18.659     tools        : false
00:02:18.659     with-libaio  : enabled
00:02:18.659     with-liburing: enabled
00:02:18.659     with-libvfn  : disabled
00:02:18.659     with-spdk    : disabled
00:02:18.659
00:02:18.659 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:19.226 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:19.226 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:19.226 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:19.226 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:19.226 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:19.226 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:19.226 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:19.226 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:19.226 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:19.485 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:19.485 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:19.485 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:19.485 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:19.485 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:19.485 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:19.485 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:19.485 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:19.485 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:19.485 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:19.485 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:19.485 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:19.485 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:19.485 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:19.485 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:19.485 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:19.742 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:19.742 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:19.742 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:19.742 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:19.742 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:19.742 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:19.742 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:19.742 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:19.742 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:19.742 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:19.742 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:19.742 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:19.742 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:19.742 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:19.742 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:19.742 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:19.742 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:19.742 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:19.742 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:19.742 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:20.000 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:20.000 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:20.000 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:20.000 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:20.000 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:20.000 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:20.000 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:20.000 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:20.000 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:20.000 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:20.000 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:20.000 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:20.000 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:20.000 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:20.257 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:20.257 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:20.257 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:20.257 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:20.257 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:20.257 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:20.257 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:20.257 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:20.257 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:20.514 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:20.514 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:20.514 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:20.514 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:20.514 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:20.514 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:21.448 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:21.448 [75/76] Linking static target lib/libxnvme.a
00:02:21.448 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:21.448 INFO: autodetecting backend as ninja
00:02:21.448 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:21.448 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:33.704 The Meson build system
00:02:33.704 Version: 1.5.0
00:02:33.704 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:33.704 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:33.704 Build type: native build
00:02:33.704 Program cat found: YES (/usr/bin/cat)
00:02:33.704 Project name: DPDK
00:02:33.704 Project version: 24.03.0
00:02:33.704 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:33.704 C linker for the host machine: cc ld.bfd 2.40-14
00:02:33.704 Host machine cpu family: x86_64
00:02:33.704 Host machine cpu: x86_64
00:02:33.704 Message: ## Building in Developer Mode ##
00:02:33.704 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:33.704 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:33.704 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:33.704 Program python3 found: YES (/usr/bin/python3)
00:02:33.704 Program cat found: YES (/usr/bin/cat)
00:02:33.704 Compiler for C supports arguments -march=native: YES
00:02:33.704 Checking for size of "void *" : 8
00:02:33.704 Checking for size of "void *" : 8 (cached)
00:02:33.704 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:33.704 Library m found: YES
00:02:33.704 Library numa found: YES
00:02:33.704 Has header "numaif.h" : YES
00:02:33.704 Library fdt found: NO
00:02:33.704 Library execinfo found: NO
00:02:33.704 Has header "execinfo.h" : YES
00:02:33.704 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:33.704 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:33.704 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:33.704 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:33.704 Run-time dependency openssl found: YES 3.1.1
00:02:33.704 Run-time dependency libpcap found: YES 1.10.4
00:02:33.704 Has header "pcap.h" with dependency libpcap: YES
00:02:33.704 Compiler for C supports arguments -Wcast-qual: YES
00:02:33.704 Compiler for C supports arguments -Wdeprecated: YES
00:02:33.704 Compiler for C supports arguments -Wformat: YES
00:02:33.704 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:33.704 Compiler for C supports arguments -Wformat-security: NO
00:02:33.704 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:33.704 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:33.704 Compiler for C supports arguments -Wnested-externs: YES
00:02:33.704 Compiler for C supports arguments -Wold-style-definition: YES
00:02:33.704 Compiler for C supports arguments -Wpointer-arith: YES
00:02:33.704 Compiler for C supports arguments -Wsign-compare: YES
00:02:33.704 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:33.704 Compiler for C supports arguments -Wundef: YES
00:02:33.704 Compiler for C supports arguments -Wwrite-strings: YES
00:02:33.704 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:33.704 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:33.704 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:33.704 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:33.704 Program objdump found: YES (/usr/bin/objdump)
00:02:33.704 Compiler for C supports arguments -mavx512f: YES
00:02:33.704 Checking if "AVX512 checking" compiles: YES
00:02:33.704 Fetching value of define "__SSE4_2__" : 1
00:02:33.704 Fetching value of define "__AES__" : 1
00:02:33.704 Fetching value of define "__AVX__" : 1
00:02:33.704 Fetching value of define "__AVX2__" : 1
00:02:33.704 Fetching value of define "__AVX512BW__" : (undefined)
00:02:33.704 Fetching value of define "__AVX512CD__" : (undefined)
00:02:33.704 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:33.704 Fetching value of define "__AVX512F__" : (undefined)
00:02:33.704 Fetching value of define "__AVX512VL__" : (undefined)
00:02:33.704 Fetching value of define "__PCLMUL__" : 1
00:02:33.704 Fetching value of define "__RDRND__" : 1
00:02:33.704 Fetching value of define "__RDSEED__" : 1
00:02:33.704 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:33.704 Fetching value of define "__znver1__" : (undefined)
00:02:33.704 Fetching value of define "__znver2__" : (undefined)
00:02:33.704 Fetching value of define "__znver3__" : (undefined)
00:02:33.704 Fetching value of define "__znver4__" : (undefined)
00:02:33.704 Library asan found: YES
00:02:33.704 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:33.704 Message: lib/log: Defining dependency "log"
00:02:33.704 Message: lib/kvargs: Defining dependency "kvargs"
00:02:33.704 Message: lib/telemetry: Defining dependency "telemetry"
00:02:33.704 Library rt found: YES
00:02:33.704 Checking for function "getentropy" : NO
00:02:33.704 Message: lib/eal: Defining dependency "eal"
00:02:33.704 Message: lib/ring: Defining dependency "ring"
00:02:33.704 Message: lib/rcu: Defining dependency "rcu"
00:02:33.704 Message: lib/mempool: Defining dependency "mempool"
00:02:33.704 Message: lib/mbuf: Defining dependency "mbuf"
00:02:33.704 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:33.704 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:33.704 Compiler for C supports arguments -mpclmul: YES
00:02:33.704 Compiler for C supports arguments -maes: YES
00:02:33.704 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:33.704 Compiler for C supports arguments -mavx512bw: YES
00:02:33.704 Compiler for C supports arguments -mavx512dq: YES
00:02:33.704 Compiler for C supports arguments -mavx512vl: YES
00:02:33.704 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:33.704 Compiler for C supports arguments -mavx2: YES
00:02:33.704 Compiler for C supports arguments -mavx: YES
00:02:33.704 Message: lib/net: Defining dependency "net"
00:02:33.704 Message: lib/meter: Defining dependency "meter"
00:02:33.704 Message: lib/ethdev: Defining dependency "ethdev"
00:02:33.705 Message: lib/pci: Defining dependency "pci"
00:02:33.705 Message: lib/cmdline: Defining dependency "cmdline"
00:02:33.705 Message: lib/hash: Defining dependency "hash"
00:02:33.705 Message: lib/timer: Defining dependency "timer"
00:02:33.705 Message: lib/compressdev: Defining dependency "compressdev"
00:02:33.705 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:33.705 Message: lib/dmadev: Defining dependency "dmadev"
00:02:33.705 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:33.705 Message: lib/power: Defining dependency "power"
00:02:33.705 Message: lib/reorder: Defining dependency "reorder"
00:02:33.705 Message: lib/security: Defining dependency "security"
00:02:33.705 Has header "linux/userfaultfd.h" : YES
00:02:33.705 Has header "linux/vduse.h" : YES
00:02:33.705 Message: lib/vhost: Defining dependency "vhost"
00:02:33.705 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:33.705 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:33.705 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:33.705 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:33.705 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:33.705 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:33.705 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:33.705 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:33.705 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:33.705 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:33.705 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:33.705 Configuring doxy-api-html.conf using configuration
00:02:33.705 Configuring doxy-api-man.conf using configuration
00:02:33.705 Program mandb found: YES (/usr/bin/mandb)
00:02:33.705 Program sphinx-build found: NO
00:02:33.705 Configuring rte_build_config.h using configuration
00:02:33.705 Message:
00:02:33.705 =================
00:02:33.705 Applications Enabled
00:02:33.705 =================
00:02:33.705
00:02:33.705 apps:
00:02:33.705
00:02:33.705
00:02:33.705 Message:
00:02:33.705 =================
00:02:33.705 Libraries Enabled
00:02:33.705 =================
00:02:33.705
00:02:33.705 libs:
00:02:33.705 	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:33.705 	net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:33.705 	cryptodev, dmadev, power, reorder, security, vhost,
00:02:33.705
00:02:33.705 Message:
00:02:33.705 ===============
00:02:33.705 Drivers Enabled
00:02:33.705 ===============
00:02:33.705
00:02:33.705 common:
00:02:33.705
00:02:33.705 bus:
00:02:33.705 	pci, vdev,
00:02:33.705 mempool:
00:02:33.705 	ring,
00:02:33.705 dma:
00:02:33.705
00:02:33.705 net:
00:02:33.705
00:02:33.705 crypto:
00:02:33.705
00:02:33.705 compress:
00:02:33.705
00:02:33.705 vdpa:
00:02:33.705
00:02:33.705
00:02:33.705 Message:
00:02:33.705 =================
00:02:33.705 Content Skipped
00:02:33.705 =================
00:02:33.705
00:02:33.705 apps:
00:02:33.705 	dumpcap:	explicitly disabled via build config
00:02:33.705 	graph:	explicitly disabled via build config
00:02:33.705 	pdump:	explicitly disabled via build config
00:02:33.705 	proc-info:	explicitly disabled via build config
00:02:33.705 	test-acl:	explicitly disabled via build config
00:02:33.705 	test-bbdev:	explicitly disabled via build config
00:02:33.705 	test-cmdline:	explicitly disabled via build config
00:02:33.705 	test-compress-perf:	explicitly disabled via build config
00:02:33.705 	test-crypto-perf:	explicitly disabled via build config
00:02:33.705 	test-dma-perf:	explicitly disabled via build config
00:02:33.705 	test-eventdev:	explicitly disabled via build config
00:02:33.705 	test-fib:	explicitly disabled via build config
00:02:33.705 	test-flow-perf:	explicitly disabled via build config
00:02:33.705 	test-gpudev:	explicitly disabled via build config
00:02:33.705 	test-mldev:	explicitly disabled via build config
00:02:33.705 	test-pipeline:	explicitly disabled via build config
00:02:33.705 	test-pmd:	explicitly disabled via build config
00:02:33.705 	test-regex:	explicitly disabled via build config
00:02:33.705 	test-sad:	explicitly disabled via build config
00:02:33.705 	test-security-perf:	explicitly disabled via build config
00:02:33.705
00:02:33.705 libs:
00:02:33.705 	argparse:	explicitly disabled via build config
00:02:33.705 	metrics:	explicitly disabled via build config
00:02:33.705 	acl:	explicitly disabled via build config
00:02:33.705 	bbdev:	explicitly disabled via build config
00:02:33.705 	bitratestats:	explicitly disabled via build config
00:02:33.705 	bpf:	explicitly disabled via build config
00:02:33.705 	cfgfile:	explicitly disabled via build config
00:02:33.705 	distributor:	explicitly disabled via build config
00:02:33.705 	efd:	explicitly disabled via build config
00:02:33.705 	eventdev:	explicitly disabled via build config
00:02:33.705 	dispatcher:	explicitly disabled via build config
00:02:33.705 	gpudev:	explicitly disabled via build config
00:02:33.705 	gro:	explicitly disabled via build config
00:02:33.705 	gso:	explicitly disabled via build config
00:02:33.705 	ip_frag:	explicitly disabled via build config
00:02:33.705 	jobstats:	explicitly disabled via build config
00:02:33.705 	latencystats:	explicitly disabled via build config
00:02:33.705 	lpm:	explicitly disabled via build config
00:02:33.705 	member:	explicitly disabled via build config
00:02:33.705 	pcapng:	explicitly disabled via build config
00:02:33.705 	rawdev:	explicitly disabled via build config
00:02:33.705 	regexdev:	explicitly disabled via build config
00:02:33.705 	mldev:	explicitly disabled via build config
00:02:33.705 	rib:	explicitly disabled via build config
00:02:33.705 	sched:	explicitly disabled via build config
00:02:33.705 	stack:	explicitly disabled via build config
00:02:33.705 	ipsec:	explicitly disabled via build config
00:02:33.705 	pdcp:	explicitly disabled via build config
00:02:33.705 	fib:	explicitly disabled via build config
00:02:33.705 	port:	explicitly disabled via build config
00:02:33.705 	pdump:	explicitly disabled via build config
00:02:33.705 	table:	explicitly disabled via build config
00:02:33.705 	pipeline:	explicitly disabled via build config
00:02:33.705 	graph:	explicitly disabled via build config
00:02:33.705 	node:	explicitly disabled via build config
00:02:33.705
00:02:33.705 drivers:
00:02:33.705 	common/cpt:	not in enabled drivers build config
00:02:33.705 	common/dpaax:	not in enabled drivers build config
00:02:33.705 	common/iavf:	not in enabled drivers build config
00:02:33.705 	common/idpf:	not in enabled drivers build config
00:02:33.705 	common/ionic:	not in enabled drivers build config
00:02:33.705 	common/mvep:	not in enabled drivers build config
00:02:33.705 	common/octeontx:	not in enabled drivers build config
00:02:33.705 	bus/auxiliary:	not in enabled drivers build config
00:02:33.705 	bus/cdx:	not in enabled drivers build config
00:02:33.705 	bus/dpaa:	not in enabled drivers build config
00:02:33.705 	bus/fslmc:	not in enabled drivers build config
00:02:33.705 	bus/ifpga:	not in enabled drivers build config
00:02:33.705 	bus/platform:	not in enabled drivers build config
00:02:33.705 	bus/uacce:	not in enabled drivers build config
00:02:33.705 	bus/vmbus:	not in enabled drivers build config
00:02:33.705 	common/cnxk:	not in enabled drivers build config
00:02:33.705 	common/mlx5:	not in enabled drivers build config
00:02:33.705 	common/nfp:	not in enabled drivers build config
00:02:33.705 	common/nitrox:	not in enabled drivers build config
00:02:33.705 	common/qat:	not in enabled drivers build config
00:02:33.705 	common/sfc_efx:	not in enabled drivers build config
00:02:33.705 	mempool/bucket:	not in enabled drivers build config
00:02:33.705 	mempool/cnxk:	not in enabled drivers build config
00:02:33.705 	mempool/dpaa:	not in enabled drivers build config
00:02:33.705 	mempool/dpaa2:	not in enabled drivers build config
00:02:33.705 	mempool/octeontx:	not in enabled drivers build config
00:02:33.705 	mempool/stack:	not in enabled drivers build config
00:02:33.705 	dma/cnxk:	not in enabled drivers build config
00:02:33.705 	dma/dpaa:	not in enabled drivers build config
00:02:33.705 	dma/dpaa2:	not in enabled drivers build config
00:02:33.705 	dma/hisilicon:	not in enabled drivers build config
00:02:33.705 	dma/idxd:	not in enabled drivers build config
00:02:33.705 	dma/ioat:	not in enabled drivers build config
00:02:33.705 	dma/skeleton:	not in enabled drivers build config
00:02:33.705 	net/af_packet:	not in enabled drivers build config
00:02:33.705 	net/af_xdp:	not in enabled drivers build config
00:02:33.705 	net/ark:	not in enabled drivers build config
00:02:33.705 	net/atlantic:	not in enabled drivers build config
00:02:33.705 	net/avp:	not in enabled drivers build config
00:02:33.705 	net/axgbe:	not in enabled drivers build config
00:02:33.705 	net/bnx2x:	not in enabled drivers build config
00:02:33.705 	net/bnxt:	not in enabled drivers build config
00:02:33.705 	net/bonding:	not in enabled drivers build config
00:02:33.705 	net/cnxk:	not in enabled drivers build config
00:02:33.705 	net/cpfl:	not in enabled drivers build config
00:02:33.705 	net/cxgbe:	not in enabled drivers build config
00:02:33.705 	net/dpaa:	not in enabled drivers build config
00:02:33.705 	net/dpaa2:	not in enabled drivers build config
00:02:33.705 	net/e1000:	not in enabled drivers build config
00:02:33.705 	net/ena:	not in enabled drivers build config
00:02:33.705 	net/enetc:	not in enabled drivers build config
00:02:33.705 	net/enetfec:	not in enabled drivers build config
00:02:33.705 	net/enic:	not in enabled drivers build config
00:02:33.705 	net/failsafe:	not in enabled drivers build config
00:02:33.705 	net/fm10k:	not in enabled drivers build config
00:02:33.705 	net/gve:	not in enabled drivers build config
00:02:33.705 	net/hinic:	not in enabled drivers build config
00:02:33.705 	net/hns3:	not in enabled drivers build config
00:02:33.705 	net/i40e:	not in enabled drivers build config
00:02:33.705 	net/iavf:	not in enabled drivers build config
00:02:33.705 	net/ice:	not in enabled drivers build config
00:02:33.705 	net/idpf:	not in enabled drivers build config
00:02:33.705 	net/igc:	not in enabled drivers build config
00:02:33.705 	net/ionic:	not in enabled drivers build config
00:02:33.705 	net/ipn3ke:	not in enabled drivers build config
00:02:33.705 	net/ixgbe:	not in enabled drivers build config
00:02:33.705 	net/mana:	not in enabled drivers build config
00:02:33.705 	net/memif:	not in enabled drivers build config
00:02:33.705 	net/mlx4:	not in enabled drivers build config
00:02:33.705 	net/mlx5:	not in enabled drivers build config
00:02:33.706 	net/mvneta:	not in enabled drivers build config
00:02:33.706 	net/mvpp2:	not in enabled drivers build config
00:02:33.706 	net/netvsc:	not in enabled drivers build config
00:02:33.706 	net/nfb:	not in enabled drivers build config
00:02:33.706 	net/nfp:	not in enabled drivers build config
00:02:33.706 	net/ngbe:	not in enabled drivers build config
00:02:33.706 	net/null:	not in enabled drivers build config
00:02:33.706 	net/octeontx:	not in enabled drivers build config
00:02:33.706 	net/octeon_ep:	not in enabled drivers build config
00:02:33.706 	net/pcap:	not in enabled drivers build config
00:02:33.706 	net/pfe:	not in enabled drivers build config
00:02:33.706 	net/qede:	not in enabled drivers build config
00:02:33.706 	net/ring:	not in enabled drivers build config
00:02:33.706 	net/sfc:	not in enabled drivers build config
00:02:33.706 	net/softnic:	not in enabled drivers build config
00:02:33.706 	net/tap:	not in enabled drivers build config
00:02:33.706 	net/thunderx:	not in enabled drivers build config
00:02:33.706 	net/txgbe:	not in enabled drivers build config
00:02:33.706 	net/vdev_netvsc:	not in enabled drivers build config
00:02:33.706 	net/vhost:	not in enabled drivers build config
00:02:33.706 	net/virtio:	not in enabled drivers build config
00:02:33.706 	net/vmxnet3:	not in enabled drivers build config
00:02:33.706 	raw/*:	missing internal dependency, "rawdev"
00:02:33.706 	crypto/armv8:	not in enabled drivers build config
00:02:33.706 	crypto/bcmfs:	not in enabled drivers build config
00:02:33.706 	crypto/caam_jr:	not in enabled drivers build config
00:02:33.706 	crypto/ccp:	not in enabled drivers build config
00:02:33.706 	crypto/cnxk:	not in enabled drivers build config
00:02:33.706 	crypto/dpaa_sec:	not in enabled drivers build config
00:02:33.706 	crypto/dpaa2_sec:	not in enabled drivers build config
00:02:33.706 	crypto/ipsec_mb:	not in enabled drivers build config
00:02:33.706 	crypto/mlx5:	not in enabled drivers build config
00:02:33.706 	crypto/mvsam:	not in enabled drivers build config
00:02:33.706 	crypto/nitrox:	not in enabled drivers build config
00:02:33.706 	crypto/null:	not in enabled drivers build config
00:02:33.706 	crypto/octeontx:	not in enabled drivers build config
00:02:33.706
crypto/openssl: not in enabled drivers build config 00:02:33.706 crypto/scheduler: not in enabled drivers build config 00:02:33.706 crypto/uadk: not in enabled drivers build config 00:02:33.706 crypto/virtio: not in enabled drivers build config 00:02:33.706 compress/isal: not in enabled drivers build config 00:02:33.706 compress/mlx5: not in enabled drivers build config 00:02:33.706 compress/nitrox: not in enabled drivers build config 00:02:33.706 compress/octeontx: not in enabled drivers build config 00:02:33.706 compress/zlib: not in enabled drivers build config 00:02:33.706 regex/*: missing internal dependency, "regexdev" 00:02:33.706 ml/*: missing internal dependency, "mldev" 00:02:33.706 vdpa/ifc: not in enabled drivers build config 00:02:33.706 vdpa/mlx5: not in enabled drivers build config 00:02:33.706 vdpa/nfp: not in enabled drivers build config 00:02:33.706 vdpa/sfc: not in enabled drivers build config 00:02:33.706 event/*: missing internal dependency, "eventdev" 00:02:33.706 baseband/*: missing internal dependency, "bbdev" 00:02:33.706 gpu/*: missing internal dependency, "gpudev" 00:02:33.706 00:02:33.706 00:02:33.706 Build targets in project: 85 00:02:33.706 00:02:33.706 DPDK 24.03.0 00:02:33.706 00:02:33.706 User defined options 00:02:33.706 buildtype : debug 00:02:33.706 default_library : shared 00:02:33.706 libdir : lib 00:02:33.706 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:33.706 b_sanitize : address 00:02:33.706 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:33.706 c_link_args : 00:02:33.706 cpu_instruction_set: native 00:02:33.706 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:33.706 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:33.706 enable_docs : false 00:02:33.706 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:33.706 enable_kmods : false 00:02:33.706 max_lcores : 128 00:02:33.706 tests : false 00:02:33.706 00:02:33.706 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:33.706 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:33.706 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:33.706 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:33.706 [3/268] Linking static target lib/librte_kvargs.a 00:02:33.706 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:33.706 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:33.706 [6/268] Linking static target lib/librte_log.a 00:02:34.271 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.271 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.271 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.528 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:34.528 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 
00:02:34.528 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:34.528 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:34.784 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.784 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.041 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.041 [17/268] Linking target lib/librte_log.so.24.1 00:02:35.041 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.041 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.041 [20/268] Linking static target lib/librte_telemetry.a 00:02:35.299 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:35.556 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:35.556 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.556 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.814 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.814 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.814 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:36.073 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:36.073 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:36.331 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.331 [31/268] Linking target lib/librte_telemetry.so.24.1 00:02:36.331 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:36.331 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:36.590 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:36.849 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:36.849 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:37.107 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:37.107 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:37.107 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:37.365 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:37.365 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:37.365 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:37.365 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:37.623 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:37.623 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.187 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:38.444 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:38.444 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:38.703 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.703 [50/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:38.703 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:38.703 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.961 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:39.220 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:39.220 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:39.220 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:39.787 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:40.045 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.045 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:40.045 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.045 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:40.303 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:40.303 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:40.303 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:40.560 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:40.918 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:40.918 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:41.484 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:41.742 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:41.742 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:41.742 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:42.000 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:42.000 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.000 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:42.000 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:42.258 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:42.517 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.517 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:42.517 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.083 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:43.083 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:43.342 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.600 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.600 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:43.600 [85/268] Linking static target lib/librte_ring.a 00:02:44.166 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:44.166 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:44.166 [88/268] Linking static target lib/librte_eal.a 00:02:44.424 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:44.424 [90/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.424 [91/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.424 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.683 [93/268] Linking static target lib/librte_rcu.a 00:02:44.683 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.683 [95/268] Linking static target lib/librte_mempool.a 00:02:44.683 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:44.941 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:45.506 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.506 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:45.765 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:45.765 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:46.023 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:46.023 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:46.023 [104/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.281 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:46.566 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:46.566 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:46.566 [108/268] Linking static target lib/librte_mbuf.a 00:02:46.566 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:46.566 [110/268] Linking static target lib/librte_net.a 00:02:46.566 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:46.852 [112/268] Linking static target lib/librte_meter.a 00:02:47.420 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.420 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:47.420 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.420 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:47.678 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:47.678 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:47.937 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:47.937 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.871 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:49.129 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:49.388 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:49.646 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:49.646 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:49.646 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:49.646 [127/268] Linking static target lib/librte_pci.a 00:02:49.905 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.905 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:49.905 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:50.163 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:50.163 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 
00:02:50.163 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:50.163 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.423 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.423 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:50.423 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:50.423 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:50.682 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.682 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:50.682 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.682 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:50.682 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:50.682 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:51.249 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.508 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.767 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.767 [148/268] Linking static target lib/librte_cmdline.a 00:02:51.767 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:52.025 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.283 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:52.283 [152/268] Linking static target lib/librte_timer.a 00:02:52.283 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.569 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:52.569 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.569 [156/268] Linking static target lib/librte_ethdev.a 00:02:52.827 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:53.086 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:53.086 [159/268] Linking static target lib/librte_hash.a 00:02:53.086 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.344 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:53.344 [162/268] Linking static target lib/librte_compressdev.a 00:02:53.344 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:53.601 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:53.601 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:53.860 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.117 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:54.375 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:54.375 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:54.375 [170/268] Linking static target lib/librte_dmadev.a 00:02:54.375 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:54.633 [172/268] Generating lib/compressdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:54.633 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:54.892 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.892 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:55.458 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:55.716 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.716 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:55.716 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:55.716 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:55.974 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:56.231 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:56.231 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:56.231 [184/268] Linking static target lib/librte_cryptodev.a 00:02:56.488 [185/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:56.747 [186/268] Linking static target lib/librte_power.a 00:02:57.005 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:57.263 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:57.263 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:57.521 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:57.521 [191/268] Linking static target lib/librte_reorder.a 00:02:57.521 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:57.521 [193/268] Linking static target lib/librte_security.a 00:02:58.455 [194/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.455 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.021 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.021 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:59.021 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:59.588 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:59.588 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:59.848 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.107 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:00.107 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:00.107 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:00.366 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:00.933 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:01.191 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:01.191 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:01.191 [209/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.191 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:01.450 [211/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:01.450 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:01.450 [213/268] Linking target lib/librte_eal.so.24.1 00:03:01.709 [214/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:01.709 [215/268] Linking target lib/librte_pci.so.24.1 00:03:01.709 [216/268] Linking target lib/librte_ring.so.24.1 00:03:01.709 [217/268] Linking target lib/librte_timer.so.24.1 00:03:01.709 [218/268] Linking target lib/librte_meter.so.24.1 00:03:01.709 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:01.709 [220/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:01.983 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.983 [222/268] Linking static target drivers/librte_bus_pci.a 00:03:01.984 [223/268] Linking target lib/librte_dmadev.so.24.1 00:03:01.984 [224/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:01.984 [225/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:01.984 [226/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.984 [227/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.984 [228/268] Linking static target drivers/librte_bus_vdev.a 00:03:02.246 [229/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.246 [230/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:02.246 [231/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:02.246 [232/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:02.246 [233/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:02.246 [234/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:02.246 [235/268] Linking target lib/librte_rcu.so.24.1 00:03:02.246 [236/268] Linking target lib/librte_mempool.so.24.1 00:03:02.505 [237/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:02.505 [238/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:02.763 [239/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:02.763 [240/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.763 [241/268] Linking static target drivers/librte_mempool_ring.a 00:03:02.763 [242/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.763 [243/268] Linking target lib/librte_mbuf.so.24.1 00:03:02.764 [244/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:03.022 [245/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.022 [246/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:03.022 [247/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:03.328 [248/268] Linking target lib/librte_compressdev.so.24.1 00:03:03.328 [249/268] Linking target lib/librte_reorder.so.24.1 00:03:03.328 [250/268] Linking target lib/librte_cryptodev.so.24.1 00:03:03.328 [251/268] Linking target lib/librte_net.so.24.1 00:03:03.591 [252/268] Generating symbol file 
lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:03.591 [253/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:03.591 [254/268] Linking target lib/librte_hash.so.24.1 00:03:03.591 [255/268] Linking target lib/librte_security.so.24.1 00:03:03.591 [256/268] Linking target lib/librte_cmdline.so.24.1 00:03:03.591 [257/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.591 [258/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:03.849 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:04.783 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:05.349 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.607 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:05.866 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:05.866 [264/268] Linking target lib/librte_power.so.24.1 00:03:12.419 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:12.419 [266/268] Linking static target lib/librte_vhost.a 00:03:12.986 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.986 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:12.986 INFO: autodetecting backend as ninja 00:03:12.986 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:39.524 CC lib/log/log.o 00:03:39.524 CC lib/ut/ut.o 00:03:39.524 CC lib/log/log_deprecated.o 00:03:39.524 CC lib/log/log_flags.o 00:03:39.524 CC lib/ut_mock/mock.o 00:03:39.524 LIB libspdk_ut.a 00:03:39.524 LIB libspdk_ut_mock.a 00:03:39.524 SO libspdk_ut.so.2.0 00:03:39.524 SO libspdk_ut_mock.so.6.0 00:03:39.524 LIB libspdk_log.a 00:03:39.524 SO libspdk_log.so.7.1 00:03:39.524 SYMLINK libspdk_ut.so 00:03:39.524 SYMLINK libspdk_ut_mock.so 00:03:39.524 SYMLINK libspdk_log.so 00:03:39.524 CC lib/util/base64.o 00:03:39.524 CC lib/util/bit_array.o 00:03:39.524 CXX lib/trace_parser/trace.o 00:03:39.524 CC lib/util/cpuset.o 00:03:39.524 CC lib/util/crc16.o 00:03:39.524 CC lib/util/crc32c.o 00:03:39.524 CC lib/util/crc32.o 00:03:39.524 CC lib/dma/dma.o 00:03:39.524 CC lib/ioat/ioat.o 00:03:39.524 CC lib/vfio_user/host/vfio_user_pci.o 00:03:39.524 CC lib/vfio_user/host/vfio_user.o 00:03:39.524 CC lib/util/crc32_ieee.o 00:03:39.524 CC lib/util/crc64.o 00:03:39.524 CC lib/util/dif.o 00:03:39.524 CC lib/util/fd.o 00:03:39.524 LIB libspdk_dma.a 00:03:39.524 CC lib/util/fd_group.o 00:03:39.524 SO libspdk_dma.so.5.0 00:03:39.524 LIB libspdk_ioat.a 00:03:39.524 CC lib/util/file.o 00:03:39.524 CC lib/util/hexlify.o 00:03:39.524 SO libspdk_ioat.so.7.0 00:03:39.524 CC lib/util/iov.o 00:03:39.524 CC lib/util/math.o 00:03:39.524 SYMLINK libspdk_dma.so 00:03:39.524 CC lib/util/net.o 00:03:39.524 SYMLINK libspdk_ioat.so 00:03:39.524 CC lib/util/pipe.o 00:03:39.524 LIB libspdk_vfio_user.a 00:03:39.524 CC lib/util/strerror_tls.o 00:03:39.524 CC lib/util/string.o 00:03:39.524 SO libspdk_vfio_user.so.5.0 00:03:39.524 SYMLINK libspdk_vfio_user.so 00:03:39.524 CC lib/util/uuid.o 00:03:39.524 CC lib/util/xor.o 00:03:39.524 CC lib/util/zipf.o 00:03:39.524 CC lib/util/md5.o 00:03:39.524 LIB libspdk_util.a 00:03:39.782 SO libspdk_util.so.10.1 00:03:39.782 LIB libspdk_trace_parser.a 00:03:39.782 SYMLINK libspdk_util.so 00:03:39.782 SO libspdk_trace_parser.so.6.0 
00:03:40.040 SYMLINK libspdk_trace_parser.so 00:03:40.040 CC lib/rdma_utils/rdma_utils.o 00:03:40.040 CC lib/json/json_parse.o 00:03:40.040 CC lib/json/json_util.o 00:03:40.040 CC lib/json/json_write.o 00:03:40.040 CC lib/idxd/idxd.o 00:03:40.040 CC lib/idxd/idxd_user.o 00:03:40.040 CC lib/idxd/idxd_kernel.o 00:03:40.040 CC lib/vmd/vmd.o 00:03:40.040 CC lib/conf/conf.o 00:03:40.040 CC lib/env_dpdk/env.o 00:03:40.298 CC lib/vmd/led.o 00:03:40.298 CC lib/env_dpdk/memory.o 00:03:40.555 CC lib/env_dpdk/pci.o 00:03:40.555 CC lib/env_dpdk/init.o 00:03:40.555 LIB libspdk_rdma_utils.a 00:03:40.555 LIB libspdk_conf.a 00:03:40.555 LIB libspdk_json.a 00:03:40.555 SO libspdk_conf.so.6.0 00:03:40.555 SO libspdk_rdma_utils.so.1.0 00:03:40.555 SO libspdk_json.so.6.0 00:03:40.555 SYMLINK libspdk_conf.so 00:03:40.555 CC lib/env_dpdk/threads.o 00:03:40.555 CC lib/env_dpdk/pci_ioat.o 00:03:40.555 SYMLINK libspdk_rdma_utils.so 00:03:40.555 CC lib/env_dpdk/pci_virtio.o 00:03:40.555 SYMLINK libspdk_json.so 00:03:40.555 CC lib/env_dpdk/pci_vmd.o 00:03:40.813 CC lib/env_dpdk/pci_idxd.o 00:03:40.813 CC lib/env_dpdk/pci_event.o 00:03:40.813 CC lib/env_dpdk/sigbus_handler.o 00:03:41.070 CC lib/env_dpdk/pci_dpdk.o 00:03:41.070 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:41.070 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:41.070 CC lib/rdma_provider/common.o 00:03:41.070 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:41.328 LIB libspdk_idxd.a 00:03:41.328 SO libspdk_idxd.so.12.1 00:03:41.328 LIB libspdk_vmd.a 00:03:41.328 CC lib/jsonrpc/jsonrpc_server.o 00:03:41.328 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:41.328 CC lib/jsonrpc/jsonrpc_client.o 00:03:41.328 SO libspdk_vmd.so.6.0 00:03:41.328 SYMLINK libspdk_idxd.so 00:03:41.328 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:41.611 LIB libspdk_rdma_provider.a 00:03:41.611 SYMLINK libspdk_vmd.so 00:03:41.611 SO libspdk_rdma_provider.so.7.0 00:03:41.611 SYMLINK libspdk_rdma_provider.so 00:03:41.869 LIB libspdk_jsonrpc.a 00:03:41.869 SO libspdk_jsonrpc.so.6.0 00:03:41.869 SYMLINK libspdk_jsonrpc.so 00:03:42.126 CC lib/rpc/rpc.o 00:03:42.386 LIB libspdk_rpc.a 00:03:42.386 SO libspdk_rpc.so.6.0 00:03:42.644 SYMLINK libspdk_rpc.so 00:03:42.902 CC lib/trace/trace.o 00:03:42.902 CC lib/trace/trace_flags.o 00:03:42.902 CC lib/notify/notify.o 00:03:42.902 CC lib/trace/trace_rpc.o 00:03:42.902 CC lib/notify/notify_rpc.o 00:03:42.902 CC lib/keyring/keyring.o 00:03:42.902 CC lib/keyring/keyring_rpc.o 00:03:42.902 LIB libspdk_env_dpdk.a 00:03:43.162 LIB libspdk_notify.a 00:03:43.162 SO libspdk_env_dpdk.so.15.1 00:03:43.162 SO libspdk_notify.so.6.0 00:03:43.162 SYMLINK libspdk_notify.so 00:03:43.162 LIB libspdk_keyring.a 00:03:43.162 SYMLINK libspdk_env_dpdk.so 00:03:43.162 SO libspdk_keyring.so.2.0 00:03:43.162 LIB libspdk_trace.a 00:03:43.421 SYMLINK libspdk_keyring.so 00:03:43.421 SO libspdk_trace.so.11.0 00:03:43.421 SYMLINK libspdk_trace.so 00:03:43.679 CC lib/thread/thread.o 00:03:43.679 CC lib/thread/iobuf.o 00:03:43.679 CC lib/sock/sock.o 00:03:43.679 CC lib/sock/sock_rpc.o 00:03:44.245 LIB libspdk_sock.a 00:03:44.245 SO libspdk_sock.so.10.0 00:03:44.245 SYMLINK libspdk_sock.so 00:03:44.503 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:44.503 CC lib/nvme/nvme_fabric.o 00:03:44.503 CC lib/nvme/nvme_ctrlr.o 00:03:44.503 CC lib/nvme/nvme_ns.o 00:03:44.503 CC lib/nvme/nvme_ns_cmd.o 00:03:44.503 CC lib/nvme/nvme_pcie_common.o 00:03:44.503 CC lib/nvme/nvme_pcie.o 00:03:44.503 CC lib/nvme/nvme_qpair.o 00:03:44.503 CC lib/nvme/nvme.o 00:03:45.436 CC lib/nvme/nvme_quirks.o 00:03:45.436 CC 
lib/nvme/nvme_transport.o 00:03:45.694 CC lib/nvme/nvme_discovery.o 00:03:45.694 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:45.694 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:45.694 CC lib/nvme/nvme_tcp.o 00:03:45.953 CC lib/nvme/nvme_opal.o 00:03:46.211 CC lib/nvme/nvme_io_msg.o 00:03:46.211 CC lib/nvme/nvme_poll_group.o 00:03:46.470 CC lib/nvme/nvme_zns.o 00:03:46.470 CC lib/nvme/nvme_stubs.o 00:03:46.728 CC lib/nvme/nvme_auth.o 00:03:46.728 CC lib/nvme/nvme_cuse.o 00:03:46.728 LIB libspdk_thread.a 00:03:46.728 SO libspdk_thread.so.11.0 00:03:46.986 SYMLINK libspdk_thread.so 00:03:46.986 CC lib/nvme/nvme_rdma.o 00:03:47.244 CC lib/accel/accel.o 00:03:47.244 CC lib/accel/accel_rpc.o 00:03:47.502 CC lib/accel/accel_sw.o 00:03:47.502 CC lib/blob/blobstore.o 00:03:47.502 CC lib/blob/request.o 00:03:47.762 CC lib/blob/zeroes.o 00:03:47.762 CC lib/blob/blob_bs_dev.o 00:03:48.328 CC lib/virtio/virtio.o 00:03:48.328 CC lib/init/json_config.o 00:03:48.328 CC lib/virtio/virtio_vhost_user.o 00:03:48.328 CC lib/init/subsystem.o 00:03:48.586 CC lib/fsdev/fsdev.o 00:03:48.845 CC lib/fsdev/fsdev_io.o 00:03:48.845 CC lib/init/subsystem_rpc.o 00:03:48.845 CC lib/virtio/virtio_vfio_user.o 00:03:49.102 CC lib/virtio/virtio_pci.o 00:03:49.102 LIB libspdk_accel.a 00:03:49.103 CC lib/init/rpc.o 00:03:49.103 CC lib/fsdev/fsdev_rpc.o 00:03:49.103 SO libspdk_accel.so.16.0 00:03:49.360 SYMLINK libspdk_accel.so 00:03:49.360 LIB libspdk_init.a 00:03:49.619 SO libspdk_init.so.6.0 00:03:49.619 CC lib/bdev/bdev.o 00:03:49.619 CC lib/bdev/bdev_rpc.o 00:03:49.619 CC lib/bdev/bdev_zone.o 00:03:49.619 CC lib/bdev/part.o 00:03:49.619 CC lib/bdev/scsi_nvme.o 00:03:49.619 LIB libspdk_virtio.a 00:03:49.619 SYMLINK libspdk_init.so 00:03:49.619 SO libspdk_virtio.so.7.0 00:03:49.877 SYMLINK libspdk_virtio.so 00:03:49.877 CC lib/event/app.o 00:03:49.877 CC lib/event/reactor.o 00:03:49.877 CC lib/event/log_rpc.o 00:03:49.877 CC lib/event/app_rpc.o 00:03:49.877 LIB libspdk_fsdev.a 00:03:49.877 LIB libspdk_nvme.a 00:03:50.136 SO libspdk_fsdev.so.2.0 00:03:50.136 SYMLINK libspdk_fsdev.so 00:03:50.136 CC lib/event/scheduler_static.o 00:03:50.395 SO libspdk_nvme.so.15.0 00:03:50.653 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:50.928 SYMLINK libspdk_nvme.so 00:03:50.928 LIB libspdk_event.a 00:03:51.193 SO libspdk_event.so.14.0 00:03:51.193 SYMLINK libspdk_event.so 00:03:51.759 LIB libspdk_fuse_dispatcher.a 00:03:51.759 SO libspdk_fuse_dispatcher.so.1.0 00:03:51.759 SYMLINK libspdk_fuse_dispatcher.so 00:03:54.284 LIB libspdk_blob.a 00:03:54.284 SO libspdk_blob.so.12.0 00:03:54.284 SYMLINK libspdk_blob.so 00:03:54.542 LIB libspdk_bdev.a 00:03:54.542 CC lib/blobfs/blobfs.o 00:03:54.542 SO libspdk_bdev.so.17.0 00:03:54.542 CC lib/blobfs/tree.o 00:03:54.542 CC lib/lvol/lvol.o 00:03:54.807 SYMLINK libspdk_bdev.so 00:03:55.073 CC lib/nvmf/ctrlr.o 00:03:55.073 CC lib/nvmf/ctrlr_bdev.o 00:03:55.073 CC lib/nvmf/ctrlr_discovery.o 00:03:55.073 CC lib/nvmf/subsystem.o 00:03:55.073 CC lib/ftl/ftl_core.o 00:03:55.073 CC lib/scsi/dev.o 00:03:55.073 CC lib/ublk/ublk.o 00:03:55.073 CC lib/nbd/nbd.o 00:03:55.330 CC lib/scsi/lun.o 00:03:55.896 CC lib/ftl/ftl_init.o 00:03:55.896 CC lib/scsi/port.o 00:03:55.896 CC lib/scsi/scsi.o 00:03:55.896 CC lib/nbd/nbd_rpc.o 00:03:56.155 CC lib/scsi/scsi_bdev.o 00:03:56.155 CC lib/scsi/scsi_pr.o 00:03:56.155 LIB libspdk_nbd.a 00:03:56.155 CC lib/ftl/ftl_layout.o 00:03:56.155 LIB libspdk_blobfs.a 00:03:56.155 SO libspdk_nbd.so.7.0 00:03:56.155 CC lib/ublk/ublk_rpc.o 00:03:56.417 SO libspdk_blobfs.so.11.0 00:03:56.417 LIB 
libspdk_lvol.a 00:03:56.417 SYMLINK libspdk_nbd.so 00:03:56.417 CC lib/scsi/scsi_rpc.o 00:03:56.417 CC lib/scsi/task.o 00:03:56.417 SYMLINK libspdk_blobfs.so 00:03:56.417 CC lib/ftl/ftl_debug.o 00:03:56.417 SO libspdk_lvol.so.11.0 00:03:56.675 SYMLINK libspdk_lvol.so 00:03:56.675 CC lib/ftl/ftl_io.o 00:03:56.675 LIB libspdk_ublk.a 00:03:56.675 SO libspdk_ublk.so.3.0 00:03:56.675 CC lib/ftl/ftl_sb.o 00:03:56.939 SYMLINK libspdk_ublk.so 00:03:56.939 CC lib/ftl/ftl_l2p.o 00:03:56.939 CC lib/ftl/ftl_l2p_flat.o 00:03:56.939 CC lib/ftl/ftl_nv_cache.o 00:03:56.939 CC lib/ftl/ftl_band.o 00:03:56.939 CC lib/nvmf/nvmf.o 00:03:57.197 CC lib/ftl/ftl_band_ops.o 00:03:57.197 CC lib/ftl/ftl_writer.o 00:03:57.197 CC lib/ftl/ftl_rq.o 00:03:57.197 LIB libspdk_scsi.a 00:03:57.456 CC lib/ftl/ftl_reloc.o 00:03:57.456 SO libspdk_scsi.so.9.0 00:03:57.456 CC lib/nvmf/nvmf_rpc.o 00:03:57.715 SYMLINK libspdk_scsi.so 00:03:57.715 CC lib/nvmf/transport.o 00:03:57.715 CC lib/nvmf/tcp.o 00:03:57.715 CC lib/nvmf/stubs.o 00:03:57.715 CC lib/nvmf/mdns_server.o 00:03:57.974 CC lib/iscsi/conn.o 00:03:58.285 CC lib/iscsi/init_grp.o 00:03:58.544 CC lib/iscsi/iscsi.o 00:03:58.802 CC lib/nvmf/rdma.o 00:03:58.802 CC lib/ftl/ftl_l2p_cache.o 00:03:58.802 CC lib/ftl/ftl_p2l.o 00:03:59.060 CC lib/ftl/ftl_p2l_log.o 00:03:59.319 CC lib/ftl/mngt/ftl_mngt.o 00:03:59.319 CC lib/iscsi/param.o 00:03:59.319 CC lib/vhost/vhost.o 00:03:59.577 CC lib/iscsi/portal_grp.o 00:03:59.577 CC lib/iscsi/tgt_node.o 00:03:59.835 CC lib/iscsi/iscsi_subsystem.o 00:03:59.835 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:59.835 CC lib/vhost/vhost_rpc.o 00:03:59.835 CC lib/vhost/vhost_scsi.o 00:03:59.835 CC lib/vhost/vhost_blk.o 00:04:00.093 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:00.352 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:00.352 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:00.610 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:00.610 CC lib/vhost/rte_vhost_user.o 00:04:00.610 CC lib/iscsi/iscsi_rpc.o 00:04:00.868 CC lib/nvmf/auth.o 00:04:00.868 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:01.126 CC lib/iscsi/task.o 00:04:01.126 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:01.126 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:01.384 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:01.384 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:01.384 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:01.384 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:01.384 CC lib/ftl/utils/ftl_conf.o 00:04:01.384 LIB libspdk_iscsi.a 00:04:01.642 SO libspdk_iscsi.so.8.0 00:04:01.642 CC lib/ftl/utils/ftl_md.o 00:04:01.642 CC lib/ftl/utils/ftl_mempool.o 00:04:01.901 CC lib/ftl/utils/ftl_bitmap.o 00:04:01.901 CC lib/ftl/utils/ftl_property.o 00:04:01.901 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:01.901 SYMLINK libspdk_iscsi.so 00:04:01.901 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:02.159 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:02.159 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:02.159 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:02.159 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:02.159 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:02.418 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:02.418 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:02.418 LIB libspdk_vhost.a 00:04:02.418 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:02.418 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:02.418 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:02.676 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:02.676 SO libspdk_vhost.so.8.0 00:04:02.676 CC lib/ftl/base/ftl_base_dev.o 00:04:02.676 CC lib/ftl/base/ftl_base_bdev.o 00:04:02.676 SYMLINK libspdk_vhost.so 00:04:02.676 CC lib/ftl/ftl_trace.o 00:04:03.242 LIB 
libspdk_nvmf.a 00:04:03.242 LIB libspdk_ftl.a 00:04:03.242 SO libspdk_nvmf.so.20.0 00:04:03.809 SO libspdk_ftl.so.9.0 00:04:03.809 SYMLINK libspdk_nvmf.so 00:04:04.071 SYMLINK libspdk_ftl.so 00:04:04.694 CC module/env_dpdk/env_dpdk_rpc.o 00:04:04.694 CC module/keyring/file/keyring.o 00:04:04.694 CC module/fsdev/aio/fsdev_aio.o 00:04:04.694 CC module/accel/error/accel_error.o 00:04:04.694 CC module/keyring/linux/keyring.o 00:04:04.694 CC module/blob/bdev/blob_bdev.o 00:04:04.694 CC module/accel/dsa/accel_dsa.o 00:04:04.694 CC module/accel/ioat/accel_ioat.o 00:04:04.694 CC module/sock/posix/posix.o 00:04:04.694 LIB libspdk_env_dpdk_rpc.a 00:04:04.694 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:04.694 SO libspdk_env_dpdk_rpc.so.6.0 00:04:04.971 SYMLINK libspdk_env_dpdk_rpc.so 00:04:04.971 CC module/keyring/linux/keyring_rpc.o 00:04:05.230 CC module/keyring/file/keyring_rpc.o 00:04:05.230 CC module/accel/ioat/accel_ioat_rpc.o 00:04:05.230 LIB libspdk_keyring_linux.a 00:04:05.230 CC module/accel/error/accel_error_rpc.o 00:04:05.488 LIB libspdk_blob_bdev.a 00:04:05.488 LIB libspdk_scheduler_dynamic.a 00:04:05.488 SO libspdk_keyring_linux.so.1.0 00:04:05.488 LIB libspdk_accel_ioat.a 00:04:05.488 SO libspdk_blob_bdev.so.12.0 00:04:05.488 SO libspdk_scheduler_dynamic.so.4.0 00:04:05.488 CC module/accel/dsa/accel_dsa_rpc.o 00:04:05.488 SO libspdk_accel_ioat.so.6.0 00:04:05.746 LIB libspdk_keyring_file.a 00:04:05.746 SYMLINK libspdk_keyring_linux.so 00:04:05.746 SYMLINK libspdk_blob_bdev.so 00:04:05.746 LIB libspdk_accel_error.a 00:04:05.746 SO libspdk_keyring_file.so.2.0 00:04:05.746 SYMLINK libspdk_scheduler_dynamic.so 00:04:05.746 SO libspdk_accel_error.so.2.0 00:04:05.746 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:05.746 SYMLINK libspdk_accel_ioat.so 00:04:05.746 SYMLINK libspdk_keyring_file.so 00:04:06.005 CC module/fsdev/aio/linux_aio_mgr.o 00:04:06.005 LIB libspdk_accel_dsa.a 00:04:06.005 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:06.005 SYMLINK libspdk_accel_error.so 00:04:06.005 SO libspdk_accel_dsa.so.5.0 00:04:06.263 CC module/scheduler/gscheduler/gscheduler.o 00:04:06.263 SYMLINK libspdk_accel_dsa.so 00:04:06.263 LIB libspdk_scheduler_dpdk_governor.a 00:04:06.521 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:06.521 CC module/bdev/delay/vbdev_delay.o 00:04:06.521 CC module/accel/iaa/accel_iaa.o 00:04:06.521 LIB libspdk_fsdev_aio.a 00:04:06.521 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:06.521 CC module/accel/iaa/accel_iaa_rpc.o 00:04:06.521 CC module/blobfs/bdev/blobfs_bdev.o 00:04:06.521 CC module/bdev/error/vbdev_error.o 00:04:06.521 LIB libspdk_scheduler_gscheduler.a 00:04:06.521 CC module/bdev/gpt/gpt.o 00:04:06.521 SO libspdk_fsdev_aio.so.1.0 00:04:06.521 SO libspdk_scheduler_gscheduler.so.4.0 00:04:06.779 CC module/bdev/lvol/vbdev_lvol.o 00:04:06.779 SYMLINK libspdk_scheduler_gscheduler.so 00:04:06.779 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:06.779 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:06.779 SYMLINK libspdk_fsdev_aio.so 00:04:06.779 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:06.779 LIB libspdk_sock_posix.a 00:04:06.779 LIB libspdk_accel_iaa.a 00:04:07.037 CC module/bdev/gpt/vbdev_gpt.o 00:04:07.037 SO libspdk_sock_posix.so.6.0 00:04:07.037 SO libspdk_accel_iaa.so.3.0 00:04:07.037 SYMLINK libspdk_accel_iaa.so 00:04:07.037 SYMLINK libspdk_sock_posix.so 00:04:07.037 CC module/bdev/error/vbdev_error_rpc.o 00:04:07.295 LIB libspdk_blobfs_bdev.a 00:04:07.295 CC module/bdev/malloc/bdev_malloc.o 00:04:07.295 LIB libspdk_bdev_delay.a 00:04:07.295 SO 
libspdk_blobfs_bdev.so.6.0 00:04:07.295 SO libspdk_bdev_delay.so.6.0 00:04:07.295 SYMLINK libspdk_blobfs_bdev.so 00:04:07.554 CC module/bdev/null/bdev_null.o 00:04:07.554 SYMLINK libspdk_bdev_delay.so 00:04:07.554 LIB libspdk_bdev_error.a 00:04:07.554 CC module/bdev/nvme/bdev_nvme.o 00:04:07.554 SO libspdk_bdev_error.so.6.0 00:04:07.554 LIB libspdk_bdev_gpt.a 00:04:07.554 CC module/bdev/passthru/vbdev_passthru.o 00:04:07.554 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:07.554 SYMLINK libspdk_bdev_error.so 00:04:07.554 SO libspdk_bdev_gpt.so.6.0 00:04:07.812 CC module/bdev/null/bdev_null_rpc.o 00:04:07.812 SYMLINK libspdk_bdev_gpt.so 00:04:07.812 CC module/bdev/raid/bdev_raid.o 00:04:07.812 CC module/bdev/split/vbdev_split.o 00:04:07.812 LIB libspdk_bdev_lvol.a 00:04:08.071 CC module/bdev/split/vbdev_split_rpc.o 00:04:08.071 SO libspdk_bdev_lvol.so.6.0 00:04:08.071 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:08.071 LIB libspdk_bdev_null.a 00:04:08.071 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:08.071 SO libspdk_bdev_null.so.6.0 00:04:08.071 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:08.071 SYMLINK libspdk_bdev_lvol.so 00:04:08.071 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:08.330 SYMLINK libspdk_bdev_null.so 00:04:08.330 CC module/bdev/nvme/nvme_rpc.o 00:04:08.330 LIB libspdk_bdev_passthru.a 00:04:08.330 CC module/bdev/nvme/bdev_mdns_client.o 00:04:08.330 SO libspdk_bdev_passthru.so.6.0 00:04:08.330 LIB libspdk_bdev_split.a 00:04:08.330 LIB libspdk_bdev_malloc.a 00:04:08.330 SO libspdk_bdev_split.so.6.0 00:04:08.590 SO libspdk_bdev_malloc.so.6.0 00:04:08.590 SYMLINK libspdk_bdev_passthru.so 00:04:08.590 CC module/bdev/nvme/vbdev_opal.o 00:04:08.590 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:08.590 SYMLINK libspdk_bdev_split.so 00:04:08.590 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:08.590 SYMLINK libspdk_bdev_malloc.so 00:04:08.590 CC module/bdev/raid/bdev_raid_rpc.o 00:04:08.849 CC module/bdev/raid/bdev_raid_sb.o 00:04:08.849 CC module/bdev/xnvme/bdev_xnvme.o 00:04:08.849 LIB libspdk_bdev_zone_block.a 00:04:08.849 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:08.849 SO libspdk_bdev_zone_block.so.6.0 00:04:09.107 SYMLINK libspdk_bdev_zone_block.so 00:04:09.107 CC module/bdev/raid/raid0.o 00:04:09.365 CC module/bdev/aio/bdev_aio.o 00:04:09.365 CC module/bdev/aio/bdev_aio_rpc.o 00:04:09.365 CC module/bdev/raid/raid1.o 00:04:09.365 CC module/bdev/ftl/bdev_ftl.o 00:04:09.365 LIB libspdk_bdev_xnvme.a 00:04:09.623 CC module/bdev/iscsi/bdev_iscsi.o 00:04:09.623 SO libspdk_bdev_xnvme.so.3.0 00:04:09.623 CC module/bdev/raid/concat.o 00:04:09.623 SYMLINK libspdk_bdev_xnvme.so 00:04:09.623 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:09.882 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:09.882 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:09.882 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:10.141 LIB libspdk_bdev_aio.a 00:04:10.141 SO libspdk_bdev_aio.so.6.0 00:04:10.141 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:10.141 LIB libspdk_bdev_ftl.a 00:04:10.141 SYMLINK libspdk_bdev_aio.so 00:04:10.141 SO libspdk_bdev_ftl.so.6.0 00:04:10.141 LIB libspdk_bdev_raid.a 00:04:10.426 LIB libspdk_bdev_iscsi.a 00:04:10.426 SO libspdk_bdev_raid.so.6.0 00:04:10.426 SYMLINK libspdk_bdev_ftl.so 00:04:10.426 SO libspdk_bdev_iscsi.so.6.0 00:04:10.426 SYMLINK libspdk_bdev_raid.so 00:04:10.426 SYMLINK libspdk_bdev_iscsi.so 00:04:11.018 LIB libspdk_bdev_virtio.a 00:04:11.018 SO libspdk_bdev_virtio.so.6.0 00:04:11.276 SYMLINK libspdk_bdev_virtio.so 00:04:12.652 LIB libspdk_bdev_nvme.a 00:04:12.652 SO 
libspdk_bdev_nvme.so.7.1 00:04:12.652 SYMLINK libspdk_bdev_nvme.so 00:04:13.218 CC module/event/subsystems/vmd/vmd.o 00:04:13.218 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:13.218 CC module/event/subsystems/fsdev/fsdev.o 00:04:13.218 CC module/event/subsystems/sock/sock.o 00:04:13.218 CC module/event/subsystems/iobuf/iobuf.o 00:04:13.218 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:13.218 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:13.218 CC module/event/subsystems/keyring/keyring.o 00:04:13.218 CC module/event/subsystems/scheduler/scheduler.o 00:04:13.477 LIB libspdk_event_sock.a 00:04:13.477 LIB libspdk_event_keyring.a 00:04:13.477 LIB libspdk_event_fsdev.a 00:04:13.477 LIB libspdk_event_vhost_blk.a 00:04:13.477 LIB libspdk_event_vmd.a 00:04:13.477 SO libspdk_event_sock.so.5.0 00:04:13.477 SO libspdk_event_keyring.so.1.0 00:04:13.477 SO libspdk_event_fsdev.so.1.0 00:04:13.477 SO libspdk_event_vhost_blk.so.3.0 00:04:13.736 SO libspdk_event_vmd.so.6.0 00:04:13.736 LIB libspdk_event_scheduler.a 00:04:13.736 LIB libspdk_event_iobuf.a 00:04:13.736 SO libspdk_event_scheduler.so.4.0 00:04:13.736 SYMLINK libspdk_event_sock.so 00:04:13.736 SYMLINK libspdk_event_keyring.so 00:04:13.736 SYMLINK libspdk_event_fsdev.so 00:04:13.736 SO libspdk_event_iobuf.so.3.0 00:04:13.736 SYMLINK libspdk_event_vhost_blk.so 00:04:13.736 SYMLINK libspdk_event_vmd.so 00:04:13.736 SYMLINK libspdk_event_scheduler.so 00:04:13.736 SYMLINK libspdk_event_iobuf.so 00:04:13.993 CC module/event/subsystems/accel/accel.o 00:04:14.252 LIB libspdk_event_accel.a 00:04:14.252 SO libspdk_event_accel.so.6.0 00:04:14.252 SYMLINK libspdk_event_accel.so 00:04:14.511 CC module/event/subsystems/bdev/bdev.o 00:04:14.770 LIB libspdk_event_bdev.a 00:04:14.770 SO libspdk_event_bdev.so.6.0 00:04:14.770 SYMLINK libspdk_event_bdev.so 00:04:15.027 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:15.027 CC module/event/subsystems/ublk/ublk.o 00:04:15.027 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:15.027 CC module/event/subsystems/nbd/nbd.o 00:04:15.028 CC module/event/subsystems/scsi/scsi.o 00:04:15.285 LIB libspdk_event_scsi.a 00:04:15.285 LIB libspdk_event_nbd.a 00:04:15.285 LIB libspdk_event_ublk.a 00:04:15.285 SO libspdk_event_scsi.so.6.0 00:04:15.285 SO libspdk_event_nbd.so.6.0 00:04:15.285 SO libspdk_event_ublk.so.3.0 00:04:15.285 LIB libspdk_event_nvmf.a 00:04:15.285 SYMLINK libspdk_event_nbd.so 00:04:15.285 SYMLINK libspdk_event_scsi.so 00:04:15.286 SYMLINK libspdk_event_ublk.so 00:04:15.286 SO libspdk_event_nvmf.so.6.0 00:04:15.544 SYMLINK libspdk_event_nvmf.so 00:04:15.544 CC module/event/subsystems/iscsi/iscsi.o 00:04:15.544 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:15.802 LIB libspdk_event_iscsi.a 00:04:15.802 SO libspdk_event_iscsi.so.6.0 00:04:15.802 LIB libspdk_event_vhost_scsi.a 00:04:15.802 SO libspdk_event_vhost_scsi.so.3.0 00:04:15.802 SYMLINK libspdk_event_iscsi.so 00:04:16.060 SYMLINK libspdk_event_vhost_scsi.so 00:04:16.060 SO libspdk.so.6.0 00:04:16.060 SYMLINK libspdk.so 00:04:16.318 CXX app/trace/trace.o 00:04:16.318 CC app/trace_record/trace_record.o 00:04:16.318 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:16.318 CC app/nvmf_tgt/nvmf_main.o 00:04:16.318 CC app/iscsi_tgt/iscsi_tgt.o 00:04:16.318 CC examples/ioat/perf/perf.o 00:04:16.575 CC test/thread/poller_perf/poller_perf.o 00:04:16.575 CC examples/util/zipf/zipf.o 00:04:16.575 CC app/spdk_tgt/spdk_tgt.o 00:04:16.575 CC test/dma/test_dma/test_dma.o 00:04:16.575 LINK interrupt_tgt 00:04:16.575 LINK spdk_trace_record 
00:04:16.575 LINK nvmf_tgt 00:04:16.833 LINK poller_perf 00:04:16.833 LINK zipf 00:04:16.833 LINK iscsi_tgt 00:04:16.833 LINK spdk_tgt 00:04:16.833 LINK ioat_perf 00:04:16.833 CC app/spdk_lspci/spdk_lspci.o 00:04:17.091 CC app/spdk_nvme_perf/perf.o 00:04:17.091 LINK spdk_trace 00:04:17.091 CC app/spdk_nvme_identify/identify.o 00:04:17.091 TEST_HEADER include/spdk/accel.h 00:04:17.091 TEST_HEADER include/spdk/accel_module.h 00:04:17.091 TEST_HEADER include/spdk/assert.h 00:04:17.091 TEST_HEADER include/spdk/barrier.h 00:04:17.091 TEST_HEADER include/spdk/base64.h 00:04:17.091 TEST_HEADER include/spdk/bdev.h 00:04:17.091 TEST_HEADER include/spdk/bdev_module.h 00:04:17.091 LINK spdk_lspci 00:04:17.091 TEST_HEADER include/spdk/bdev_zone.h 00:04:17.091 TEST_HEADER include/spdk/bit_array.h 00:04:17.091 TEST_HEADER include/spdk/bit_pool.h 00:04:17.091 TEST_HEADER include/spdk/blob_bdev.h 00:04:17.091 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:17.091 TEST_HEADER include/spdk/blobfs.h 00:04:17.091 TEST_HEADER include/spdk/blob.h 00:04:17.091 TEST_HEADER include/spdk/conf.h 00:04:17.091 TEST_HEADER include/spdk/config.h 00:04:17.091 TEST_HEADER include/spdk/cpuset.h 00:04:17.091 CC test/app/bdev_svc/bdev_svc.o 00:04:17.091 TEST_HEADER include/spdk/crc16.h 00:04:17.091 TEST_HEADER include/spdk/crc32.h 00:04:17.091 CC examples/ioat/verify/verify.o 00:04:17.091 TEST_HEADER include/spdk/crc64.h 00:04:17.091 TEST_HEADER include/spdk/dif.h 00:04:17.091 TEST_HEADER include/spdk/dma.h 00:04:17.091 TEST_HEADER include/spdk/endian.h 00:04:17.091 TEST_HEADER include/spdk/env_dpdk.h 00:04:17.091 TEST_HEADER include/spdk/env.h 00:04:17.091 TEST_HEADER include/spdk/event.h 00:04:17.091 TEST_HEADER include/spdk/fd_group.h 00:04:17.091 TEST_HEADER include/spdk/fd.h 00:04:17.091 TEST_HEADER include/spdk/file.h 00:04:17.091 TEST_HEADER include/spdk/fsdev.h 00:04:17.091 TEST_HEADER include/spdk/fsdev_module.h 00:04:17.091 TEST_HEADER include/spdk/ftl.h 00:04:17.091 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:17.091 TEST_HEADER include/spdk/gpt_spec.h 00:04:17.091 TEST_HEADER include/spdk/hexlify.h 00:04:17.091 TEST_HEADER include/spdk/histogram_data.h 00:04:17.091 TEST_HEADER include/spdk/idxd.h 00:04:17.091 TEST_HEADER include/spdk/idxd_spec.h 00:04:17.091 TEST_HEADER include/spdk/init.h 00:04:17.349 TEST_HEADER include/spdk/ioat.h 00:04:17.349 TEST_HEADER include/spdk/ioat_spec.h 00:04:17.349 TEST_HEADER include/spdk/iscsi_spec.h 00:04:17.349 TEST_HEADER include/spdk/json.h 00:04:17.349 TEST_HEADER include/spdk/jsonrpc.h 00:04:17.349 TEST_HEADER include/spdk/keyring.h 00:04:17.349 TEST_HEADER include/spdk/keyring_module.h 00:04:17.349 TEST_HEADER include/spdk/likely.h 00:04:17.349 TEST_HEADER include/spdk/log.h 00:04:17.349 TEST_HEADER include/spdk/lvol.h 00:04:17.349 TEST_HEADER include/spdk/md5.h 00:04:17.349 TEST_HEADER include/spdk/memory.h 00:04:17.349 TEST_HEADER include/spdk/mmio.h 00:04:17.349 TEST_HEADER include/spdk/nbd.h 00:04:17.349 TEST_HEADER include/spdk/net.h 00:04:17.349 TEST_HEADER include/spdk/notify.h 00:04:17.349 TEST_HEADER include/spdk/nvme.h 00:04:17.349 TEST_HEADER include/spdk/nvme_intel.h 00:04:17.349 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:17.349 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:17.349 TEST_HEADER include/spdk/nvme_spec.h 00:04:17.349 TEST_HEADER include/spdk/nvme_zns.h 00:04:17.349 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:17.349 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:17.349 TEST_HEADER include/spdk/nvmf.h 00:04:17.349 TEST_HEADER 
include/spdk/nvmf_spec.h 00:04:17.349 TEST_HEADER include/spdk/nvmf_transport.h 00:04:17.349 TEST_HEADER include/spdk/opal.h 00:04:17.349 TEST_HEADER include/spdk/opal_spec.h 00:04:17.349 TEST_HEADER include/spdk/pci_ids.h 00:04:17.349 TEST_HEADER include/spdk/pipe.h 00:04:17.349 TEST_HEADER include/spdk/queue.h 00:04:17.349 TEST_HEADER include/spdk/reduce.h 00:04:17.349 TEST_HEADER include/spdk/rpc.h 00:04:17.349 TEST_HEADER include/spdk/scheduler.h 00:04:17.349 TEST_HEADER include/spdk/scsi.h 00:04:17.349 TEST_HEADER include/spdk/scsi_spec.h 00:04:17.349 TEST_HEADER include/spdk/sock.h 00:04:17.349 TEST_HEADER include/spdk/stdinc.h 00:04:17.349 TEST_HEADER include/spdk/string.h 00:04:17.349 TEST_HEADER include/spdk/thread.h 00:04:17.349 TEST_HEADER include/spdk/trace.h 00:04:17.349 TEST_HEADER include/spdk/trace_parser.h 00:04:17.349 TEST_HEADER include/spdk/tree.h 00:04:17.349 TEST_HEADER include/spdk/ublk.h 00:04:17.349 TEST_HEADER include/spdk/util.h 00:04:17.349 CC test/event/event_perf/event_perf.o 00:04:17.349 TEST_HEADER include/spdk/uuid.h 00:04:17.349 TEST_HEADER include/spdk/version.h 00:04:17.349 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:17.349 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:17.349 TEST_HEADER include/spdk/vhost.h 00:04:17.349 TEST_HEADER include/spdk/vmd.h 00:04:17.349 TEST_HEADER include/spdk/xor.h 00:04:17.349 TEST_HEADER include/spdk/zipf.h 00:04:17.349 CXX test/cpp_headers/accel.o 00:04:17.349 CC test/env/vtophys/vtophys.o 00:04:17.349 LINK bdev_svc 00:04:17.349 LINK test_dma 00:04:17.349 LINK verify 00:04:17.607 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:17.607 CC test/env/mem_callbacks/mem_callbacks.o 00:04:17.607 CXX test/cpp_headers/accel_module.o 00:04:17.607 LINK event_perf 00:04:17.864 LINK vtophys 00:04:17.864 LINK env_dpdk_post_init 00:04:17.864 CXX test/cpp_headers/assert.o 00:04:17.864 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:17.865 CC test/event/reactor/reactor.o 00:04:17.865 CC test/event/reactor_perf/reactor_perf.o 00:04:18.126 CC examples/thread/thread/thread_ex.o 00:04:18.126 CC test/event/app_repeat/app_repeat.o 00:04:18.126 LINK reactor 00:04:18.126 CXX test/cpp_headers/barrier.o 00:04:18.126 LINK spdk_nvme_identify 00:04:18.126 LINK reactor_perf 00:04:18.388 CC examples/sock/hello_world/hello_sock.o 00:04:18.388 LINK thread 00:04:18.388 CXX test/cpp_headers/base64.o 00:04:18.388 LINK app_repeat 00:04:18.388 LINK spdk_nvme_perf 00:04:18.388 LINK nvme_fuzz 00:04:18.645 CC examples/vmd/led/led.o 00:04:18.645 CC examples/vmd/lsvmd/lsvmd.o 00:04:18.645 CXX test/cpp_headers/bdev.o 00:04:18.645 LINK mem_callbacks 00:04:18.645 CC app/spdk_nvme_discover/discovery_aer.o 00:04:18.645 CC test/event/scheduler/scheduler.o 00:04:18.645 LINK lsvmd 00:04:18.902 LINK hello_sock 00:04:18.902 CXX test/cpp_headers/bdev_module.o 00:04:18.902 LINK led 00:04:18.902 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:18.902 LINK spdk_nvme_discover 00:04:18.902 CC test/env/memory/memory_ut.o 00:04:18.902 CC examples/idxd/perf/perf.o 00:04:19.160 LINK scheduler 00:04:19.160 CC test/rpc_client/rpc_client_test.o 00:04:19.160 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:19.160 CXX test/cpp_headers/bdev_zone.o 00:04:19.160 CC app/spdk_top/spdk_top.o 00:04:19.160 CC test/accel/dif/dif.o 00:04:19.418 CC test/blobfs/mkfs/mkfs.o 00:04:19.418 LINK rpc_client_test 00:04:19.418 LINK idxd_perf 00:04:19.418 CC test/app/histogram_perf/histogram_perf.o 00:04:19.418 LINK hello_fsdev 00:04:19.418 CXX test/cpp_headers/bit_array.o 00:04:19.676 LINK mkfs 
00:04:19.676 LINK histogram_perf 00:04:19.676 CXX test/cpp_headers/bit_pool.o 00:04:19.934 CC test/nvme/aer/aer.o 00:04:19.934 CC test/lvol/esnap/esnap.o 00:04:20.193 CXX test/cpp_headers/blob_bdev.o 00:04:20.193 CC examples/accel/perf/accel_perf.o 00:04:20.193 CC test/nvme/reset/reset.o 00:04:20.193 CC test/nvme/sgl/sgl.o 00:04:20.452 CXX test/cpp_headers/blobfs_bdev.o 00:04:20.452 LINK aer 00:04:20.710 LINK memory_ut 00:04:20.710 LINK spdk_top 00:04:20.710 LINK reset 00:04:20.710 CXX test/cpp_headers/blobfs.o 00:04:20.710 LINK sgl 00:04:20.969 CC test/nvme/e2edp/nvme_dp.o 00:04:20.969 LINK dif 00:04:20.969 CXX test/cpp_headers/blob.o 00:04:20.969 CC app/vhost/vhost.o 00:04:20.969 CXX test/cpp_headers/conf.o 00:04:21.227 CXX test/cpp_headers/config.o 00:04:21.227 CC test/env/pci/pci_ut.o 00:04:21.227 CXX test/cpp_headers/cpuset.o 00:04:21.227 LINK accel_perf 00:04:21.227 LINK vhost 00:04:21.227 LINK nvme_dp 00:04:21.227 CC test/nvme/overhead/overhead.o 00:04:21.485 CC test/nvme/err_injection/err_injection.o 00:04:21.485 CC test/nvme/startup/startup.o 00:04:21.485 CXX test/cpp_headers/crc16.o 00:04:21.744 LINK overhead 00:04:21.744 CXX test/cpp_headers/crc32.o 00:04:21.744 LINK err_injection 00:04:21.744 CC examples/blob/hello_world/hello_blob.o 00:04:21.744 LINK startup 00:04:21.744 CC test/bdev/bdevio/bdevio.o 00:04:22.002 CC app/spdk_dd/spdk_dd.o 00:04:22.002 LINK pci_ut 00:04:22.002 CXX test/cpp_headers/crc64.o 00:04:22.002 CXX test/cpp_headers/dif.o 00:04:22.002 CC examples/blob/cli/blobcli.o 00:04:22.260 LINK hello_blob 00:04:22.260 CC test/nvme/reserve/reserve.o 00:04:22.260 LINK iscsi_fuzz 00:04:22.260 CXX test/cpp_headers/dma.o 00:04:22.518 CC test/nvme/simple_copy/simple_copy.o 00:04:22.518 LINK bdevio 00:04:22.518 CC test/nvme/connect_stress/connect_stress.o 00:04:22.518 LINK reserve 00:04:22.776 CXX test/cpp_headers/endian.o 00:04:22.776 CC test/nvme/boot_partition/boot_partition.o 00:04:22.776 LINK spdk_dd 00:04:22.776 CXX test/cpp_headers/env_dpdk.o 00:04:22.776 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:23.034 LINK connect_stress 00:04:23.034 CXX test/cpp_headers/env.o 00:04:23.034 LINK boot_partition 00:04:23.034 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:23.034 LINK simple_copy 00:04:23.034 LINK blobcli 00:04:23.034 CXX test/cpp_headers/event.o 00:04:23.034 CC test/nvme/compliance/nvme_compliance.o 00:04:23.294 CXX test/cpp_headers/fd_group.o 00:04:23.294 CXX test/cpp_headers/fd.o 00:04:23.294 CC test/nvme/fused_ordering/fused_ordering.o 00:04:23.294 CC app/fio/nvme/fio_plugin.o 00:04:23.552 CXX test/cpp_headers/file.o 00:04:23.552 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:23.552 CC app/fio/bdev/fio_plugin.o 00:04:23.552 LINK vhost_fuzz 00:04:23.552 CC examples/nvme/hello_world/hello_world.o 00:04:23.552 CC test/nvme/fdp/fdp.o 00:04:23.810 LINK fused_ordering 00:04:23.810 LINK doorbell_aers 00:04:23.810 CXX test/cpp_headers/fsdev.o 00:04:23.810 CC test/app/jsoncat/jsoncat.o 00:04:23.810 LINK nvme_compliance 00:04:24.069 CXX test/cpp_headers/fsdev_module.o 00:04:24.069 CXX test/cpp_headers/ftl.o 00:04:24.069 LINK hello_world 00:04:24.069 CXX test/cpp_headers/fuse_dispatcher.o 00:04:24.069 LINK fdp 00:04:24.069 CXX test/cpp_headers/gpt_spec.o 00:04:24.327 LINK jsoncat 00:04:24.327 LINK spdk_nvme 00:04:24.327 CXX test/cpp_headers/hexlify.o 00:04:24.327 CXX test/cpp_headers/histogram_data.o 00:04:24.327 CXX test/cpp_headers/idxd.o 00:04:24.585 CC examples/nvme/reconnect/reconnect.o 00:04:24.585 CC test/app/stub/stub.o 00:04:24.585 CC 
examples/bdev/hello_world/hello_bdev.o 00:04:24.585 CC test/nvme/cuse/cuse.o 00:04:24.585 CC examples/bdev/bdevperf/bdevperf.o 00:04:24.585 LINK spdk_bdev 00:04:24.843 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:24.843 CXX test/cpp_headers/idxd_spec.o 00:04:24.843 CC examples/nvme/arbitration/arbitration.o 00:04:24.843 LINK stub 00:04:24.843 CC examples/nvme/hotplug/hotplug.o 00:04:25.101 LINK hello_bdev 00:04:25.101 CXX test/cpp_headers/init.o 00:04:25.101 CXX test/cpp_headers/ioat.o 00:04:25.101 LINK reconnect 00:04:25.358 CXX test/cpp_headers/ioat_spec.o 00:04:25.358 CXX test/cpp_headers/iscsi_spec.o 00:04:25.358 LINK hotplug 00:04:25.616 CXX test/cpp_headers/json.o 00:04:25.616 LINK arbitration 00:04:25.616 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:25.616 CXX test/cpp_headers/jsonrpc.o 00:04:25.616 CXX test/cpp_headers/keyring.o 00:04:25.616 CXX test/cpp_headers/keyring_module.o 00:04:25.875 CXX test/cpp_headers/likely.o 00:04:25.875 LINK nvme_manage 00:04:25.875 LINK cmb_copy 00:04:25.875 CXX test/cpp_headers/log.o 00:04:25.875 CC examples/nvme/abort/abort.o 00:04:25.875 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:25.875 CXX test/cpp_headers/lvol.o 00:04:26.133 CXX test/cpp_headers/md5.o 00:04:26.133 CXX test/cpp_headers/memory.o 00:04:26.133 CXX test/cpp_headers/mmio.o 00:04:26.133 CXX test/cpp_headers/nbd.o 00:04:26.391 CXX test/cpp_headers/net.o 00:04:26.391 CXX test/cpp_headers/notify.o 00:04:26.391 LINK pmr_persistence 00:04:26.391 LINK bdevperf 00:04:26.391 LINK cuse 00:04:26.391 CXX test/cpp_headers/nvme.o 00:04:26.649 CXX test/cpp_headers/nvme_intel.o 00:04:26.649 CXX test/cpp_headers/nvme_ocssd.o 00:04:26.649 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:26.649 CXX test/cpp_headers/nvme_spec.o 00:04:26.649 CXX test/cpp_headers/nvme_zns.o 00:04:26.649 LINK abort 00:04:26.907 CXX test/cpp_headers/nvmf_cmd.o 00:04:26.907 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:26.907 CXX test/cpp_headers/nvmf.o 00:04:26.907 CXX test/cpp_headers/nvmf_spec.o 00:04:26.907 CXX test/cpp_headers/nvmf_transport.o 00:04:26.907 CXX test/cpp_headers/opal.o 00:04:26.907 CXX test/cpp_headers/opal_spec.o 00:04:27.166 CXX test/cpp_headers/pci_ids.o 00:04:27.166 CXX test/cpp_headers/pipe.o 00:04:27.166 CXX test/cpp_headers/queue.o 00:04:27.166 CXX test/cpp_headers/reduce.o 00:04:27.166 CXX test/cpp_headers/rpc.o 00:04:27.166 CXX test/cpp_headers/scheduler.o 00:04:27.166 CXX test/cpp_headers/scsi.o 00:04:27.166 CXX test/cpp_headers/scsi_spec.o 00:04:27.424 CXX test/cpp_headers/sock.o 00:04:27.424 CXX test/cpp_headers/stdinc.o 00:04:27.424 CXX test/cpp_headers/string.o 00:04:27.424 CXX test/cpp_headers/thread.o 00:04:27.424 CC examples/nvmf/nvmf/nvmf.o 00:04:27.424 CXX test/cpp_headers/trace.o 00:04:27.424 CXX test/cpp_headers/trace_parser.o 00:04:27.683 CXX test/cpp_headers/tree.o 00:04:27.683 CXX test/cpp_headers/ublk.o 00:04:27.683 CXX test/cpp_headers/util.o 00:04:27.683 CXX test/cpp_headers/uuid.o 00:04:27.683 CXX test/cpp_headers/version.o 00:04:27.683 CXX test/cpp_headers/vfio_user_pci.o 00:04:27.683 CXX test/cpp_headers/vfio_user_spec.o 00:04:27.683 CXX test/cpp_headers/vhost.o 00:04:27.683 CXX test/cpp_headers/vmd.o 00:04:27.950 CXX test/cpp_headers/xor.o 00:04:27.950 CXX test/cpp_headers/zipf.o 00:04:28.207 LINK nvmf 00:04:30.738 LINK esnap 00:04:30.997 00:04:30.997 real 2m15.979s 00:04:30.997 user 13m43.454s 00:04:30.997 sys 2m14.354s 00:04:30.997 18:49:02 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:30.997 18:49:02 make -- common/autotest_common.sh@10 -- $ set 
+x 00:04:30.997 ************************************ 00:04:30.997 END TEST make 00:04:30.997 ************************************ 00:04:30.997 18:49:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:30.997 18:49:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:30.997 18:49:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:30.997 18:49:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.997 18:49:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:30.997 18:49:02 -- pm/common@44 -- $ pid=5331 00:04:30.997 18:49:02 -- pm/common@50 -- $ kill -TERM 5331 00:04:30.997 18:49:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.997 18:49:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:30.997 18:49:02 -- pm/common@44 -- $ pid=5333 00:04:30.997 18:49:02 -- pm/common@50 -- $ kill -TERM 5333 00:04:30.997 18:49:02 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:30.997 18:49:02 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:30.997 18:49:02 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.997 18:49:02 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.997 18:49:02 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.254 18:49:02 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.254 18:49:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.254 18:49:02 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.254 18:49:02 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.254 18:49:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.254 18:49:02 -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.254 18:49:02 -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.254 18:49:02 -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.254 18:49:02 -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.254 18:49:02 -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.254 18:49:02 -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.254 18:49:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.254 18:49:02 -- scripts/common.sh@344 -- # case "$op" in 00:04:31.254 18:49:02 -- scripts/common.sh@345 -- # : 1 00:04:31.254 18:49:02 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.254 18:49:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.254 18:49:02 -- scripts/common.sh@365 -- # decimal 1 00:04:31.254 18:49:02 -- scripts/common.sh@353 -- # local d=1 00:04:31.254 18:49:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.254 18:49:02 -- scripts/common.sh@355 -- # echo 1 00:04:31.254 18:49:02 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.254 18:49:02 -- scripts/common.sh@366 -- # decimal 2 00:04:31.254 18:49:02 -- scripts/common.sh@353 -- # local d=2 00:04:31.254 18:49:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.254 18:49:02 -- scripts/common.sh@355 -- # echo 2 00:04:31.254 18:49:02 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.254 18:49:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.254 18:49:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.254 18:49:02 -- scripts/common.sh@368 -- # return 0 00:04:31.254 18:49:02 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.254 18:49:02 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.254 --rc genhtml_branch_coverage=1 00:04:31.254 --rc genhtml_function_coverage=1 00:04:31.254 --rc genhtml_legend=1 00:04:31.254 --rc geninfo_all_blocks=1 00:04:31.254 --rc geninfo_unexecuted_blocks=1 00:04:31.254 00:04:31.254 ' 00:04:31.254 18:49:02 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.254 --rc genhtml_branch_coverage=1 00:04:31.254 --rc genhtml_function_coverage=1 00:04:31.254 --rc genhtml_legend=1 00:04:31.254 --rc geninfo_all_blocks=1 00:04:31.254 --rc geninfo_unexecuted_blocks=1 00:04:31.254 00:04:31.254 ' 00:04:31.254 18:49:02 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.254 --rc genhtml_branch_coverage=1 00:04:31.254 --rc genhtml_function_coverage=1 00:04:31.254 --rc genhtml_legend=1 00:04:31.254 --rc geninfo_all_blocks=1 00:04:31.254 --rc geninfo_unexecuted_blocks=1 00:04:31.254 00:04:31.254 ' 00:04:31.254 18:49:02 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.254 --rc genhtml_branch_coverage=1 00:04:31.254 --rc genhtml_function_coverage=1 00:04:31.254 --rc genhtml_legend=1 00:04:31.254 --rc geninfo_all_blocks=1 00:04:31.254 --rc geninfo_unexecuted_blocks=1 00:04:31.254 00:04:31.254 ' 00:04:31.254 18:49:02 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:31.254 18:49:02 -- nvmf/common.sh@7 -- # uname -s 00:04:31.254 18:49:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:31.254 18:49:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:31.254 18:49:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:31.254 18:49:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:31.254 18:49:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:31.254 18:49:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:31.254 18:49:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:31.254 18:49:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:31.254 18:49:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:31.254 18:49:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:31.254 18:49:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:28610d4e-7ecc-4b99-9ad4-c89cbb8dd769 00:04:31.254 
18:49:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=28610d4e-7ecc-4b99-9ad4-c89cbb8dd769 00:04:31.254 18:49:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:31.254 18:49:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:31.254 18:49:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:31.254 18:49:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:31.254 18:49:02 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:31.254 18:49:02 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:31.254 18:49:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:31.254 18:49:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:31.254 18:49:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:31.254 18:49:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.255 18:49:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.255 18:49:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.255 18:49:02 -- paths/export.sh@5 -- # export PATH 00:04:31.255 18:49:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.255 18:49:02 -- nvmf/common.sh@51 -- # : 0 00:04:31.255 18:49:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:31.255 18:49:02 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:31.255 18:49:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:31.255 18:49:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:31.255 18:49:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:31.255 18:49:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:31.255 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:31.255 18:49:02 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:31.255 18:49:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:31.255 18:49:02 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:31.255 18:49:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:31.255 18:49:02 -- spdk/autotest.sh@32 -- # uname -s 00:04:31.255 18:49:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:31.255 18:49:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:31.255 18:49:02 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:31.255 18:49:02 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:31.255 18:49:02 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:31.255 18:49:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:31.255 18:49:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:31.255 18:49:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:31.255 18:49:02 -- spdk/autotest.sh@48 -- # udevadm_pid=55280 00:04:31.255 18:49:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:31.255 18:49:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:31.255 18:49:02 -- pm/common@17 -- # local monitor 00:04:31.255 18:49:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:31.255 18:49:02 -- pm/common@21 -- # date +%s 00:04:31.255 18:49:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:31.255 18:49:02 -- pm/common@25 -- # sleep 1 00:04:31.255 18:49:02 -- pm/common@21 -- # date +%s 00:04:31.255 18:49:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732646942 00:04:31.255 18:49:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732646942 00:04:31.255 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732646942_collect-cpu-load.pm.log 00:04:31.255 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732646942_collect-vmstat.pm.log 00:04:32.191 18:49:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:32.191 18:49:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:32.191 18:49:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.191 18:49:03 -- common/autotest_common.sh@10 -- # set +x 00:04:32.191 18:49:03 -- spdk/autotest.sh@59 -- # create_test_list 00:04:32.191 18:49:03 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:32.191 18:49:03 -- common/autotest_common.sh@10 -- # set +x 00:04:32.191 18:49:03 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:32.191 18:49:03 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:32.449 18:49:03 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:32.449 18:49:03 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:32.449 18:49:03 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:32.450 18:49:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:32.450 18:49:03 -- common/autotest_common.sh@1457 -- # uname 00:04:32.450 18:49:03 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:32.450 18:49:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:32.450 18:49:03 -- common/autotest_common.sh@1477 -- # uname 00:04:32.450 18:49:03 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:32.450 18:49:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:32.450 18:49:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:32.450 lcov: LCOV version 1.15 00:04:32.450 18:49:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:54.393 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:54.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:09.264 18:49:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:09.264 18:49:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.264 18:49:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.522 18:49:40 -- spdk/autotest.sh@78 -- # rm -f 00:05:09.522 18:49:40 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.348 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:10.349 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:10.349 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:10.349 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:10.349 18:49:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:10.349 18:49:41 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:10.349 18:49:41 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:10.349 18:49:41 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:10.349 18:49:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.349 18:49:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:10.349 18:49:41 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:10.349 18:49:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.349 18:49:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:10.349 18:49:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:10.349 18:49:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.349 18:49:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:10.349 18:49:41 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:10.349 18:49:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.349 18:49:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:10.349 18:49:41 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:10.349 18:49:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.349 18:49:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:10.349 18:49:41 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:10.349 18:49:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:10.349 18:49:41 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.349 18:49:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:10.349 18:49:41 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:10.349 18:49:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.349 18:49:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:10.349 18:49:41 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:10.349 18:49:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:10.349 18:49:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.349 18:49:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:10.349 18:49:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.349 18:49:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.349 18:49:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:10.349 18:49:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:10.349 18:49:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:10.608 No valid GPT data, bailing 00:05:10.608 18:49:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:10.608 18:49:41 -- scripts/common.sh@394 -- # pt= 00:05:10.608 18:49:41 -- scripts/common.sh@395 -- # return 1 00:05:10.608 18:49:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:10.608 1+0 records in 00:05:10.608 1+0 records out 00:05:10.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00999339 s, 105 MB/s 00:05:10.608 18:49:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.608 18:49:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.608 18:49:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:10.608 18:49:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:10.608 18:49:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:10.608 No valid GPT data, bailing 00:05:10.608 18:49:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:10.608 18:49:41 -- scripts/common.sh@394 -- # pt= 00:05:10.608 18:49:41 -- scripts/common.sh@395 -- # return 1 00:05:10.608 18:49:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:10.608 1+0 records in 00:05:10.608 1+0 records out 00:05:10.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00419909 s, 250 MB/s 00:05:10.608 18:49:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.608 18:49:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.608 18:49:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:10.608 18:49:41 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:10.608 18:49:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:10.608 No valid GPT data, bailing 00:05:10.608 18:49:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:10.608 18:49:41 -- scripts/common.sh@394 -- # pt= 00:05:10.608 18:49:41 -- scripts/common.sh@395 -- # return 1 00:05:10.608 18:49:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:10.608 1+0 
records in 00:05:10.608 1+0 records out 00:05:10.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437243 s, 240 MB/s 00:05:10.608 18:49:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.608 18:49:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.608 18:49:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:10.608 18:49:41 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:10.608 18:49:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:10.866 No valid GPT data, bailing 00:05:10.866 18:49:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:10.866 18:49:41 -- scripts/common.sh@394 -- # pt= 00:05:10.866 18:49:41 -- scripts/common.sh@395 -- # return 1 00:05:10.866 18:49:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:10.866 1+0 records in 00:05:10.866 1+0 records out 00:05:10.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0035907 s, 292 MB/s 00:05:10.866 18:49:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.866 18:49:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.866 18:49:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:10.866 18:49:41 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:10.866 18:49:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:10.866 No valid GPT data, bailing 00:05:10.866 18:49:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:10.866 18:49:41 -- scripts/common.sh@394 -- # pt= 00:05:10.866 18:49:41 -- scripts/common.sh@395 -- # return 1 00:05:10.867 18:49:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:10.867 1+0 records in 00:05:10.867 1+0 records out 00:05:10.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040945 s, 256 MB/s 00:05:10.867 18:49:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.867 18:49:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.867 18:49:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:10.867 18:49:41 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:10.867 18:49:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:10.867 No valid GPT data, bailing 00:05:10.867 18:49:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:10.867 18:49:42 -- scripts/common.sh@394 -- # pt= 00:05:10.867 18:49:42 -- scripts/common.sh@395 -- # return 1 00:05:10.867 18:49:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:10.867 1+0 records in 00:05:10.867 1+0 records out 00:05:10.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00397287 s, 264 MB/s 00:05:10.867 18:49:42 -- spdk/autotest.sh@105 -- # sync 00:05:10.867 18:49:42 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:10.867 18:49:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:10.867 18:49:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:12.768 18:49:43 -- spdk/autotest.sh@111 -- # uname -s 00:05:12.768 18:49:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:12.768 18:49:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:12.768 18:49:43 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:13.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.901 
Hugepages
00:05:13.901 node hugesize free / total
00:05:13.901 node0 1048576kB 0 / 0
00:05:13.901 node0 2048kB 0 / 0
00:05:13.901 
00:05:13.901 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:13.901 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:13.901 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:13.901 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:05:14.159 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:05:14.159 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:05:14.159 18:49:45 -- spdk/autotest.sh@117 -- # uname -s
00:05:14.159 18:49:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:14.159 18:49:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:14.159 18:49:45 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:14.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:15.332 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:15.332 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:15.332 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:05:15.332 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:05:15.332 18:49:46 -- common/autotest_common.sh@1517 -- # sleep 1
00:05:16.269 18:49:47 -- common/autotest_common.sh@1518 -- # bdfs=()
00:05:16.269 18:49:47 -- common/autotest_common.sh@1518 -- # local bdfs
00:05:16.269 18:49:47 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:05:16.269 18:49:47 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:05:16.269 18:49:47 -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:16.269 18:49:47 -- common/autotest_common.sh@1498 -- # local bdfs
00:05:16.269 18:49:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:16.269 18:49:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:16.269 18:49:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:16.269 18:49:47 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:05:16.269 18:49:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:05:16.269 18:49:47 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:16.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:16.836 Waiting for block devices as requested
00:05:17.095 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:05:17.095 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:05:17.095 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:05:17.095 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:05:22.385 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:05:22.385 18:49:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:05:22.385 18:49:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:05:22.385 18:49:53 -- common/autotest_common.sh@1488 -- #
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:22.385 18:49:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:22.385 18:49:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:22.385 18:49:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:22.385 18:49:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:22.385 18:49:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:22.385 18:49:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1543 -- # continue 00:05:22.385 18:49:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:22.385 18:49:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:22.385 18:49:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:22.385 18:49:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:22.385 18:49:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:22.385 18:49:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:22.385 18:49:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:22.385 18:49:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:22.385 18:49:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1543 -- # continue 00:05:22.385 18:49:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:22.385 18:49:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:22.385 18:49:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:22.385 18:49:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:22.385 18:49:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1543 -- # continue 00:05:22.385 18:49:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:22.385 18:49:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:22.385 18:49:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:22.385 18:49:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:22.385 18:49:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:22.385 18:49:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:22.385 18:49:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:22.385 18:49:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:22.386 18:49:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:22.386 18:49:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:22.386 18:49:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:22.386 18:49:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:22.386 18:49:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:22.386 18:49:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:22.386 18:49:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:05:22.386 18:49:53 -- common/autotest_common.sh@1543 -- # continue 00:05:22.386 18:49:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:22.386 18:49:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.386 18:49:53 -- common/autotest_common.sh@10 -- # set +x 00:05:22.386 18:49:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:22.386 18:49:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.386 18:49:53 -- common/autotest_common.sh@10 -- # set +x 00:05:22.386 18:49:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.519 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.519 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.519 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.519 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.519 18:49:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:23.519 18:49:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.519 18:49:54 -- common/autotest_common.sh@10 -- # set +x 00:05:23.778 18:49:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:23.778 18:49:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:23.778 18:49:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:23.778 18:49:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:23.778 18:49:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:23.778 18:49:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:23.778 18:49:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:23.778 18:49:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:23.778 18:49:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:23.778 18:49:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:23.778 18:49:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.778 18:49:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:23.778 18:49:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:23.778 18:49:54 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:23.778 18:49:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:23.778 18:49:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.778 18:49:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:23.778 18:49:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.778 18:49:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.778 18:49:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.778 18:49:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:23.778 18:49:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.778 18:49:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.778 18:49:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.778 18:49:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:23.778 18:49:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.778 18:49:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:05:23.778 18:49:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:23.778 18:49:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:23.778 18:49:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:23.778 18:49:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.778 18:49:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:23.778 18:49:54 -- common/autotest_common.sh@1572 -- # return 0 00:05:23.778 18:49:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:23.778 18:49:54 -- common/autotest_common.sh@1580 -- # return 0 00:05:23.778 18:49:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:23.778 18:49:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:23.778 18:49:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:23.778 18:49:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:23.778 18:49:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:23.778 18:49:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.778 18:49:54 -- common/autotest_common.sh@10 -- # set +x 00:05:23.778 18:49:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:23.778 18:49:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:23.778 18:49:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.778 18:49:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.778 18:49:54 -- common/autotest_common.sh@10 -- # set +x 00:05:23.778 ************************************ 00:05:23.778 START TEST env 00:05:23.778 ************************************ 00:05:23.778 18:49:54 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:23.778 * Looking for test storage... 00:05:23.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:23.778 18:49:54 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.778 18:49:54 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.778 18:49:54 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.037 18:49:55 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.037 18:49:55 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.037 18:49:55 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.037 18:49:55 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.037 18:49:55 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.037 18:49:55 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.037 18:49:55 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.037 18:49:55 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.037 18:49:55 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.037 18:49:55 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.037 18:49:55 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.037 18:49:55 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.037 18:49:55 env -- scripts/common.sh@344 -- # case "$op" in 00:05:24.037 18:49:55 env -- scripts/common.sh@345 -- # : 1 00:05:24.037 18:49:55 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.037 18:49:55 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.037 18:49:55 env -- scripts/common.sh@365 -- # decimal 1 00:05:24.037 18:49:55 env -- scripts/common.sh@353 -- # local d=1 00:05:24.037 18:49:55 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.037 18:49:55 env -- scripts/common.sh@355 -- # echo 1 00:05:24.037 18:49:55 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.037 18:49:55 env -- scripts/common.sh@366 -- # decimal 2 00:05:24.037 18:49:55 env -- scripts/common.sh@353 -- # local d=2 00:05:24.037 18:49:55 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.037 18:49:55 env -- scripts/common.sh@355 -- # echo 2 00:05:24.037 18:49:55 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.037 18:49:55 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.037 18:49:55 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.037 18:49:55 env -- scripts/common.sh@368 -- # return 0 00:05:24.037 18:49:55 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.037 18:49:55 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.037 --rc genhtml_branch_coverage=1 00:05:24.037 --rc genhtml_function_coverage=1 00:05:24.037 --rc genhtml_legend=1 00:05:24.037 --rc geninfo_all_blocks=1 00:05:24.037 --rc geninfo_unexecuted_blocks=1 00:05:24.037 00:05:24.037 ' 00:05:24.037 18:49:55 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.037 --rc genhtml_branch_coverage=1 00:05:24.037 --rc genhtml_function_coverage=1 00:05:24.037 --rc genhtml_legend=1 00:05:24.037 --rc geninfo_all_blocks=1 00:05:24.037 --rc geninfo_unexecuted_blocks=1 00:05:24.037 00:05:24.037 ' 00:05:24.038 18:49:55 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.038 --rc genhtml_branch_coverage=1 00:05:24.038 --rc genhtml_function_coverage=1 00:05:24.038 --rc genhtml_legend=1 00:05:24.038 --rc geninfo_all_blocks=1 00:05:24.038 --rc geninfo_unexecuted_blocks=1 00:05:24.038 00:05:24.038 ' 00:05:24.038 18:49:55 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.038 --rc genhtml_branch_coverage=1 00:05:24.038 --rc genhtml_function_coverage=1 00:05:24.038 --rc genhtml_legend=1 00:05:24.038 --rc geninfo_all_blocks=1 00:05:24.038 --rc geninfo_unexecuted_blocks=1 00:05:24.038 00:05:24.038 ' 00:05:24.038 18:49:55 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:24.038 18:49:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.038 18:49:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.038 18:49:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 ************************************ 00:05:24.038 START TEST env_memory 00:05:24.038 ************************************ 00:05:24.038 18:49:55 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:24.038 00:05:24.038 00:05:24.038 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.038 http://cunit.sourceforge.net/ 00:05:24.038 00:05:24.038 00:05:24.038 Suite: memory 00:05:24.038 Test: alloc and free memory map ...[2024-11-26 18:49:55.121124] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:24.038 passed 00:05:24.038 Test: mem map translation ...[2024-11-26 18:49:55.170702] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:24.038 [2024-11-26 18:49:55.170777] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:24.038 [2024-11-26 18:49:55.170862] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:24.038 [2024-11-26 18:49:55.170891] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:24.038 passed 00:05:24.038 Test: mem map registration ...[2024-11-26 18:49:55.250315] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:24.038 [2024-11-26 18:49:55.250398] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:24.297 passed 00:05:24.297 Test: mem map adjacent registrations ...passed 00:05:24.297 00:05:24.297 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.297 suites 1 1 n/a 0 0 00:05:24.297 tests 4 4 4 0 0 00:05:24.297 asserts 152 152 152 0 n/a 00:05:24.297 00:05:24.297 Elapsed time = 0.296 seconds 00:05:24.297 00:05:24.297 real 0m0.335s 00:05:24.297 user 0m0.305s 00:05:24.297 sys 0m0.023s 00:05:24.297 18:49:55 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.297 18:49:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:24.297 ************************************ 00:05:24.297 END TEST env_memory 00:05:24.297 ************************************ 00:05:24.297 18:49:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:24.297 18:49:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.297 18:49:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.297 18:49:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.297 ************************************ 00:05:24.297 START TEST env_vtophys 00:05:24.297 ************************************ 00:05:24.297 18:49:55 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:24.297 EAL: lib.eal log level changed from notice to debug 00:05:24.297 EAL: Detected lcore 0 as core 0 on socket 0 00:05:24.297 EAL: Detected lcore 1 as core 0 on socket 0 00:05:24.297 EAL: Detected lcore 2 as core 0 on socket 0 00:05:24.297 EAL: Detected lcore 3 as core 0 on socket 0 00:05:24.297 EAL: Detected lcore 4 as core 0 on socket 0 00:05:24.297 EAL: Detected lcore 5 as core 0 on socket 0 00:05:24.297 EAL: Detected lcore 6 as core 0 on socket 0 00:05:24.297 EAL: Detected lcore 7 as core 0 on socket 0 00:05:24.297 EAL: Detected lcore 8 as core 0 on socket 0 00:05:24.297 EAL: Detected lcore 9 as core 0 on socket 0 00:05:24.297 EAL: Maximum logical cores by configuration: 128 00:05:24.297 EAL: Detected CPU lcores: 10 00:05:24.297 EAL: Detected NUMA nodes: 1 00:05:24.297 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:24.297 EAL: Detected shared linkage of DPDK 00:05:24.297 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:24.556 EAL: Selected IOVA mode 'PA' 00:05:24.556 EAL: Probing VFIO support... 00:05:24.556 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:24.556 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:24.556 EAL: Ask a virtual area of 0x2e000 bytes 00:05:24.556 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:24.556 EAL: Setting up physically contiguous memory... 00:05:24.556 EAL: Setting maximum number of open files to 524288 00:05:24.556 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:24.556 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:24.556 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.556 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:24.556 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.556 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.556 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:24.556 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:24.556 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.556 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:24.556 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.556 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.556 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:24.556 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:24.556 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.556 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:24.556 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.556 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.556 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:24.556 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:24.556 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.556 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:24.556 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.556 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.556 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:24.556 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:24.556 EAL: Hugepages will be freed exactly as allocated. 00:05:24.556 EAL: No shared files mode enabled, IPC is disabled 00:05:24.556 EAL: No shared files mode enabled, IPC is disabled 00:05:24.556 EAL: TSC frequency is ~2200000 KHz 00:05:24.556 EAL: Main lcore 0 is ready (tid=7f9a192f1a40;cpuset=[0]) 00:05:24.556 EAL: Trying to obtain current memory policy. 00:05:24.556 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.556 EAL: Restoring previous memory policy: 0 00:05:24.556 EAL: request: mp_malloc_sync 00:05:24.556 EAL: No shared files mode enabled, IPC is disabled 00:05:24.556 EAL: Heap on socket 0 was expanded by 2MB 00:05:24.556 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:24.556 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:24.556 EAL: Mem event callback 'spdk:(nil)' registered 00:05:24.556 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:24.556 00:05:24.556 00:05:24.556 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.556 http://cunit.sourceforge.net/ 00:05:24.556 00:05:24.556 00:05:24.556 Suite: components_suite 00:05:24.815 Test: vtophys_malloc_test ...passed 00:05:24.815 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:24.815 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.815 EAL: Restoring previous memory policy: 4 00:05:24.815 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.815 EAL: request: mp_malloc_sync 00:05:24.815 EAL: No shared files mode enabled, IPC is disabled 00:05:24.815 EAL: Heap on socket 0 was expanded by 4MB 00:05:24.815 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.815 EAL: request: mp_malloc_sync 00:05:24.815 EAL: No shared files mode enabled, IPC is disabled 00:05:24.815 EAL: Heap on socket 0 was shrunk by 4MB 00:05:24.815 EAL: Trying to obtain current memory policy. 00:05:24.815 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.815 EAL: Restoring previous memory policy: 4 00:05:24.815 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.815 EAL: request: mp_malloc_sync 00:05:24.815 EAL: No shared files mode enabled, IPC is disabled 00:05:24.815 EAL: Heap on socket 0 was expanded by 6MB 00:05:25.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.073 EAL: request: mp_malloc_sync 00:05:25.073 EAL: No shared files mode enabled, IPC is disabled 00:05:25.073 EAL: Heap on socket 0 was shrunk by 6MB 00:05:25.073 EAL: Trying to obtain current memory policy. 00:05:25.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.073 EAL: Restoring previous memory policy: 4 00:05:25.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.073 EAL: request: mp_malloc_sync 00:05:25.073 EAL: No shared files mode enabled, IPC is disabled 00:05:25.073 EAL: Heap on socket 0 was expanded by 10MB 00:05:25.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.073 EAL: request: mp_malloc_sync 00:05:25.073 EAL: No shared files mode enabled, IPC is disabled 00:05:25.073 EAL: Heap on socket 0 was shrunk by 10MB 00:05:25.073 EAL: Trying to obtain current memory policy. 00:05:25.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.073 EAL: Restoring previous memory policy: 4 00:05:25.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.073 EAL: request: mp_malloc_sync 00:05:25.073 EAL: No shared files mode enabled, IPC is disabled 00:05:25.073 EAL: Heap on socket 0 was expanded by 18MB 00:05:25.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.073 EAL: request: mp_malloc_sync 00:05:25.073 EAL: No shared files mode enabled, IPC is disabled 00:05:25.073 EAL: Heap on socket 0 was shrunk by 18MB 00:05:25.073 EAL: Trying to obtain current memory policy. 00:05:25.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.073 EAL: Restoring previous memory policy: 4 00:05:25.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.073 EAL: request: mp_malloc_sync 00:05:25.073 EAL: No shared files mode enabled, IPC is disabled 00:05:25.073 EAL: Heap on socket 0 was expanded by 34MB 00:05:25.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.073 EAL: request: mp_malloc_sync 00:05:25.074 EAL: No shared files mode enabled, IPC is disabled 00:05:25.074 EAL: Heap on socket 0 was shrunk by 34MB 00:05:25.074 EAL: Trying to obtain current memory policy. 
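[editor's note] The "Mem event callback 'spdk:(nil)' registered" and "Calling mem event callback" entries above are the DPDK dynamic-memory hook that lets SPDK observe every heap expansion and shrink reported in this suite (4MB, 6MB, 10MB, ...). A minimal sketch of that mechanism, assuming DPDK >= 18.05 headers; the callback name "demo" and the print format are illustrative, not taken from this log:

#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memory.h>
#include <stdio.h>

/* Invoked by EAL whenever hugepage memory is added to or removed from the
 * heap, mirroring the "Heap on socket 0 was expanded/shrunk by N MB" lines. */
static void
mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
{
	(void)arg;
	printf("%s: addr=%p len=%zu\n",
	       event == RTE_MEM_EVENT_ALLOC ? "expanded" : "shrunk", addr, len);
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0) {
		return 1;
	}
	rte_mem_event_callback_register("demo", mem_event_cb, NULL);

	void *buf = rte_malloc(NULL, 4 << 20, 0); /* may grow the heap -> ALLOC event */
	rte_free(buf);                            /* may release pages -> FREE event */
	return 0;
}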
00:05:25.074 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.074 EAL: Restoring previous memory policy: 4 00:05:25.074 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.074 EAL: request: mp_malloc_sync 00:05:25.074 EAL: No shared files mode enabled, IPC is disabled 00:05:25.074 EAL: Heap on socket 0 was expanded by 66MB 00:05:25.332 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.332 EAL: request: mp_malloc_sync 00:05:25.332 EAL: No shared files mode enabled, IPC is disabled 00:05:25.332 EAL: Heap on socket 0 was shrunk by 66MB 00:05:25.332 EAL: Trying to obtain current memory policy. 00:05:25.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.332 EAL: Restoring previous memory policy: 4 00:05:25.332 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.332 EAL: request: mp_malloc_sync 00:05:25.332 EAL: No shared files mode enabled, IPC is disabled 00:05:25.332 EAL: Heap on socket 0 was expanded by 130MB 00:05:25.590 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.590 EAL: request: mp_malloc_sync 00:05:25.590 EAL: No shared files mode enabled, IPC is disabled 00:05:25.590 EAL: Heap on socket 0 was shrunk by 130MB 00:05:25.849 EAL: Trying to obtain current memory policy. 00:05:25.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.849 EAL: Restoring previous memory policy: 4 00:05:25.849 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.849 EAL: request: mp_malloc_sync 00:05:25.849 EAL: No shared files mode enabled, IPC is disabled 00:05:25.849 EAL: Heap on socket 0 was expanded by 258MB 00:05:26.108 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.108 EAL: request: mp_malloc_sync 00:05:26.108 EAL: No shared files mode enabled, IPC is disabled 00:05:26.108 EAL: Heap on socket 0 was shrunk by 258MB 00:05:26.675 EAL: Trying to obtain current memory policy. 00:05:26.675 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.675 EAL: Restoring previous memory policy: 4 00:05:26.675 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.675 EAL: request: mp_malloc_sync 00:05:26.675 EAL: No shared files mode enabled, IPC is disabled 00:05:26.675 EAL: Heap on socket 0 was expanded by 514MB 00:05:27.623 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.623 EAL: request: mp_malloc_sync 00:05:27.623 EAL: No shared files mode enabled, IPC is disabled 00:05:27.623 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.188 EAL: Trying to obtain current memory policy. 
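[editor's note] What vtophys_malloc_test is fundamentally asserting between these expand/shrink rounds: a buffer allocated from the SPDK hugepage heap must have a valid physical (or IOVA) translation. A condensed sketch, assuming spdk/env.h; the app name is hypothetical:

#include "spdk/env.h"
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* DMA-safe allocation from the hugepage heap. */
	void *buf = spdk_dma_zmalloc(4096, 0x1000, NULL);
	assert(buf != NULL);

	uint64_t size = 4096;
	uint64_t paddr = spdk_vtophys(buf, &size);
	assert(paddr != SPDK_VTOPHYS_ERROR); /* heap memory must translate */
	printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);

	spdk_dma_free(buf);
	return 0;
}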
00:05:28.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.446 EAL: Restoring previous memory policy: 4 00:05:28.446 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.446 EAL: request: mp_malloc_sync 00:05:28.446 EAL: No shared files mode enabled, IPC is disabled 00:05:28.446 EAL: Heap on socket 0 was expanded by 1026MB 00:05:29.821 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.079 EAL: request: mp_malloc_sync 00:05:30.079 EAL: No shared files mode enabled, IPC is disabled 00:05:30.079 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:31.453 passed 00:05:31.453 00:05:31.453 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.453 suites 1 1 n/a 0 0 00:05:31.453 tests 2 2 2 0 0 00:05:31.453 asserts 5649 5649 5649 0 n/a 00:05:31.453 00:05:31.453 Elapsed time = 6.896 seconds 00:05:31.453 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.453 EAL: request: mp_malloc_sync 00:05:31.453 EAL: No shared files mode enabled, IPC is disabled 00:05:31.453 EAL: Heap on socket 0 was shrunk by 2MB 00:05:31.453 EAL: No shared files mode enabled, IPC is disabled 00:05:31.453 EAL: No shared files mode enabled, IPC is disabled 00:05:31.453 EAL: No shared files mode enabled, IPC is disabled 00:05:31.453 00:05:31.453 real 0m7.232s 00:05:31.453 user 0m6.396s 00:05:31.453 sys 0m0.664s 00:05:31.454 18:50:02 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.454 18:50:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:31.454 ************************************ 00:05:31.454 END TEST env_vtophys 00:05:31.454 ************************************ 00:05:31.712 18:50:02 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:31.712 18:50:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.712 18:50:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.712 18:50:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.712 ************************************ 00:05:31.712 START TEST env_pci 00:05:31.712 ************************************ 00:05:31.712 18:50:02 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:31.712 00:05:31.712 00:05:31.712 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.712 http://cunit.sourceforge.net/ 00:05:31.712 00:05:31.712 00:05:31.712 Suite: pci 00:05:31.712 Test: pci_hook ...[2024-11-26 18:50:02.740238] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58141 has claimed it 00:05:31.712 EAL: Cannot find device (10000:00:01.0) 00:05:31.712 EAL: Failed to attach device on primary process 00:05:31.712 passed 00:05:31.712 00:05:31.712 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.712 suites 1 1 n/a 0 0 00:05:31.712 tests 1 1 1 0 0 00:05:31.712 asserts 25 25 25 0 n/a 00:05:31.712 00:05:31.712 Elapsed time = 0.010 seconds 00:05:31.712 00:05:31.712 real 0m0.082s 00:05:31.712 user 0m0.044s 00:05:31.712 sys 0m0.037s 00:05:31.712 18:50:02 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.712 18:50:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:31.712 ************************************ 00:05:31.712 END TEST env_pci 00:05:31.712 ************************************ 00:05:31.712 18:50:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:31.712 18:50:02 env -- env/env.sh@15 -- # uname 00:05:31.712 18:50:02 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:31.712 18:50:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:31.712 18:50:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.712 18:50:02 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:31.712 18:50:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.712 18:50:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.712 ************************************ 00:05:31.712 START TEST env_dpdk_post_init 00:05:31.712 ************************************ 00:05:31.712 18:50:02 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.712 EAL: Detected CPU lcores: 10 00:05:31.712 EAL: Detected NUMA nodes: 1 00:05:31.712 EAL: Detected shared linkage of DPDK 00:05:31.970 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.970 EAL: Selected IOVA mode 'PA' 00:05:31.970 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.970 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:31.970 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:31.970 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:31.970 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:31.970 Starting DPDK initialization... 00:05:31.970 Starting SPDK post initialization... 00:05:31.970 SPDK NVMe probe 00:05:31.970 Attaching to 0000:00:10.0 00:05:31.970 Attaching to 0000:00:11.0 00:05:31.970 Attaching to 0000:00:12.0 00:05:31.970 Attaching to 0000:00:13.0 00:05:31.970 Attached to 0000:00:10.0 00:05:31.970 Attached to 0000:00:11.0 00:05:31.970 Attached to 0000:00:13.0 00:05:31.970 Attached to 0000:00:12.0 00:05:31.970 Cleaning up... 
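[editor's note] The "Attaching to 0000:00:10.0 ... Attached to 0000:00:10.0" pairs above come from the standard SPDK NVMe enumeration flow: a probe callback that accepts each discovered controller, then an attach callback once the controller is ready. A minimal sketch, assuming spdk/nvme.h and an environment already initialized as in the previous example:

#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true; /* accept every controller the probe finds */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int
enumerate_nvme(void)
{
	/* Walks local PCIe NVMe devices bound to a userspace driver and
	 * invokes the callbacks for each one. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}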
00:05:31.970 00:05:31.970 real 0m0.319s 00:05:31.970 user 0m0.127s 00:05:31.970 sys 0m0.092s 00:05:31.970 18:50:03 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.970 18:50:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.970 ************************************ 00:05:31.970 END TEST env_dpdk_post_init 00:05:31.970 ************************************ 00:05:32.229 18:50:03 env -- env/env.sh@26 -- # uname 00:05:32.229 18:50:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:32.229 18:50:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.229 18:50:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.229 18:50:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.229 18:50:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.229 ************************************ 00:05:32.229 START TEST env_mem_callbacks 00:05:32.229 ************************************ 00:05:32.229 18:50:03 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.229 EAL: Detected CPU lcores: 10 00:05:32.229 EAL: Detected NUMA nodes: 1 00:05:32.229 EAL: Detected shared linkage of DPDK 00:05:32.229 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.229 EAL: Selected IOVA mode 'PA' 00:05:32.229 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:32.229 00:05:32.229 00:05:32.229 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.229 http://cunit.sourceforge.net/ 00:05:32.229 00:05:32.229 00:05:32.229 Suite: memory 00:05:32.229 Test: test ... 00:05:32.229 register 0x200000200000 2097152 00:05:32.229 malloc 3145728 00:05:32.229 register 0x200000400000 4194304 00:05:32.229 buf 0x2000004fffc0 len 3145728 PASSED 00:05:32.229 malloc 64 00:05:32.229 buf 0x2000004ffec0 len 64 PASSED 00:05:32.229 malloc 4194304 00:05:32.229 register 0x200000800000 6291456 00:05:32.229 buf 0x2000009fffc0 len 4194304 PASSED 00:05:32.229 free 0x2000004fffc0 3145728 00:05:32.229 free 0x2000004ffec0 64 00:05:32.229 unregister 0x200000400000 4194304 PASSED 00:05:32.229 free 0x2000009fffc0 4194304 00:05:32.229 unregister 0x200000800000 6291456 PASSED 00:05:32.229 malloc 8388608 00:05:32.229 register 0x200000400000 10485760 00:05:32.229 buf 0x2000005fffc0 len 8388608 PASSED 00:05:32.229 free 0x2000005fffc0 8388608 00:05:32.229 unregister 0x200000400000 10485760 PASSED 00:05:32.229 passed 00:05:32.229 00:05:32.229 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.229 suites 1 1 n/a 0 0 00:05:32.229 tests 1 1 1 0 0 00:05:32.229 asserts 15 15 15 0 n/a 00:05:32.229 00:05:32.229 Elapsed time = 0.060 seconds 00:05:32.488 00:05:32.488 real 0m0.247s 00:05:32.488 user 0m0.091s 00:05:32.488 sys 0m0.055s 00:05:32.488 18:50:03 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.488 18:50:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:32.488 ************************************ 00:05:32.488 END TEST env_mem_callbacks 00:05:32.488 ************************************ 00:05:32.488 00:05:32.488 real 0m8.645s 00:05:32.488 user 0m7.137s 00:05:32.488 sys 0m1.118s 00:05:32.488 18:50:03 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.488 18:50:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.488 ************************************ 00:05:32.488 END TEST env 00:05:32.488 
************************************ 00:05:32.488 18:50:03 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:32.488 18:50:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.488 18:50:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.488 18:50:03 -- common/autotest_common.sh@10 -- # set +x 00:05:32.488 ************************************ 00:05:32.488 START TEST rpc 00:05:32.488 ************************************ 00:05:32.488 18:50:03 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:32.488 * Looking for test storage... 00:05:32.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:32.488 18:50:03 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.488 18:50:03 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.488 18:50:03 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.488 18:50:03 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.488 18:50:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.488 18:50:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.488 18:50:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.488 18:50:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.488 18:50:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.488 18:50:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.488 18:50:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.488 18:50:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.488 18:50:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.488 18:50:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.488 18:50:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.488 18:50:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:32.488 18:50:03 rpc -- scripts/common.sh@345 -- # : 1 00:05:32.488 18:50:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.488 18:50:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.488 18:50:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:32.488 18:50:03 rpc -- scripts/common.sh@353 -- # local d=1 00:05:32.488 18:50:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.488 18:50:03 rpc -- scripts/common.sh@355 -- # echo 1 00:05:32.488 18:50:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.488 18:50:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:32.488 18:50:03 rpc -- scripts/common.sh@353 -- # local d=2 00:05:32.488 18:50:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.488 18:50:03 rpc -- scripts/common.sh@355 -- # echo 2 00:05:32.488 18:50:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.488 18:50:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.488 18:50:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.748 18:50:03 rpc -- scripts/common.sh@368 -- # return 0 00:05:32.748 18:50:03 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.748 18:50:03 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.748 --rc genhtml_branch_coverage=1 00:05:32.748 --rc genhtml_function_coverage=1 00:05:32.748 --rc genhtml_legend=1 00:05:32.748 --rc geninfo_all_blocks=1 00:05:32.748 --rc geninfo_unexecuted_blocks=1 00:05:32.748 00:05:32.748 ' 00:05:32.748 18:50:03 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.748 --rc genhtml_branch_coverage=1 00:05:32.748 --rc genhtml_function_coverage=1 00:05:32.748 --rc genhtml_legend=1 00:05:32.748 --rc geninfo_all_blocks=1 00:05:32.748 --rc geninfo_unexecuted_blocks=1 00:05:32.748 00:05:32.748 ' 00:05:32.748 18:50:03 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.748 --rc genhtml_branch_coverage=1 00:05:32.748 --rc genhtml_function_coverage=1 00:05:32.748 --rc genhtml_legend=1 00:05:32.748 --rc geninfo_all_blocks=1 00:05:32.748 --rc geninfo_unexecuted_blocks=1 00:05:32.748 00:05:32.748 ' 00:05:32.748 18:50:03 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.748 --rc genhtml_branch_coverage=1 00:05:32.748 --rc genhtml_function_coverage=1 00:05:32.748 --rc genhtml_legend=1 00:05:32.748 --rc geninfo_all_blocks=1 00:05:32.748 --rc geninfo_unexecuted_blocks=1 00:05:32.748 00:05:32.748 ' 00:05:32.748 18:50:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58268 00:05:32.748 18:50:03 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:32.748 18:50:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.748 18:50:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58268 00:05:32.748 18:50:03 rpc -- common/autotest_common.sh@835 -- # '[' -z 58268 ']' 00:05:32.748 18:50:03 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.748 18:50:03 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.748 18:50:03 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
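[editor's note] The spdk_tgt binary launched here with "-e bdev" is essentially the stock SPDK application skeleton: built-in argument parsing (which handles -e as the tracepoint group mask noted in the startup messages below) plus spdk_app_start(), which brings up the reactor and the /var/tmp/spdk.sock RPC server that waitforlisten polls for. A sketch assuming a recent spdk/event.h (the two-argument spdk_app_opts_init is version-dependent); the app name is hypothetical:

#include "spdk/event.h"

static void
start_fn(void *arg)
{
	/* Reached once the framework is up; spdk_tgt does nothing here and
	 * simply waits for JSON-RPC requests. */
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "sketch_tgt"; /* hypothetical name */
	rc = spdk_app_parse_args(argc, argv, &opts, "", NULL, NULL, NULL);
	if (rc != SPDK_APP_PARSE_ARGS_SUCCESS) {
		return rc;
	}
	rc = spdk_app_start(&opts, start_fn, NULL);
	spdk_app_fini();
	return rc;
}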
00:05:32.748 18:50:03 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.748 18:50:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.748 [2024-11-26 18:50:03.836858] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:05:32.748 [2024-11-26 18:50:03.837252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58268 ] 00:05:33.008 [2024-11-26 18:50:04.017161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.008 [2024-11-26 18:50:04.155592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:33.008 [2024-11-26 18:50:04.155870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58268' to capture a snapshot of events at runtime. 00:05:33.008 [2024-11-26 18:50:04.155905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:33.008 [2024-11-26 18:50:04.155924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:33.008 [2024-11-26 18:50:04.155939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58268 for offline analysis/debug. 00:05:33.008 [2024-11-26 18:50:04.157417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.944 18:50:04 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.944 18:50:04 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:33.944 18:50:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:33.944 18:50:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:33.944 18:50:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:33.944 18:50:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:33.944 18:50:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.944 18:50:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.944 18:50:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.944 ************************************ 00:05:33.944 START TEST rpc_integrity 00:05:33.944 ************************************ 00:05:33.944 18:50:04 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:33.944 18:50:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.944 18:50:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.944 18:50:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.944 18:50:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.944 18:50:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:33.944 18:50:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:33.944 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.944 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.944 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.944 18:50:05 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.944 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.944 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:33.944 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.944 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.944 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.944 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.944 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.944 { 00:05:33.944 "name": "Malloc0", 00:05:33.944 "aliases": [ 00:05:33.944 "e3eb0fef-18bd-4f61-9731-5dc6fbe9b81f" 00:05:33.944 ], 00:05:33.944 "product_name": "Malloc disk", 00:05:33.944 "block_size": 512, 00:05:33.944 "num_blocks": 16384, 00:05:33.944 "uuid": "e3eb0fef-18bd-4f61-9731-5dc6fbe9b81f", 00:05:33.944 "assigned_rate_limits": { 00:05:33.944 "rw_ios_per_sec": 0, 00:05:33.944 "rw_mbytes_per_sec": 0, 00:05:33.944 "r_mbytes_per_sec": 0, 00:05:33.944 "w_mbytes_per_sec": 0 00:05:33.944 }, 00:05:33.944 "claimed": false, 00:05:33.944 "zoned": false, 00:05:33.944 "supported_io_types": { 00:05:33.944 "read": true, 00:05:33.944 "write": true, 00:05:33.944 "unmap": true, 00:05:33.944 "flush": true, 00:05:33.944 "reset": true, 00:05:33.944 "nvme_admin": false, 00:05:33.944 "nvme_io": false, 00:05:33.944 "nvme_io_md": false, 00:05:33.944 "write_zeroes": true, 00:05:33.944 "zcopy": true, 00:05:33.944 "get_zone_info": false, 00:05:33.944 "zone_management": false, 00:05:33.944 "zone_append": false, 00:05:33.944 "compare": false, 00:05:33.944 "compare_and_write": false, 00:05:33.944 "abort": true, 00:05:33.944 "seek_hole": false, 00:05:33.944 "seek_data": false, 00:05:33.944 "copy": true, 00:05:33.944 "nvme_iov_md": false 00:05:33.944 }, 00:05:33.944 "memory_domains": [ 00:05:33.944 { 00:05:33.944 "dma_device_id": "system", 00:05:33.944 "dma_device_type": 1 00:05:33.944 }, 00:05:33.944 { 00:05:33.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.944 "dma_device_type": 2 00:05:33.944 } 00:05:33.944 ], 00:05:33.944 "driver_specific": {} 00:05:33.944 } 00:05:33.944 ]' 00:05:33.944 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:33.944 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.944 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:33.944 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.944 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.944 [2024-11-26 18:50:05.100395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:33.944 [2024-11-26 18:50:05.100527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.944 [2024-11-26 18:50:05.100604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:33.944 [2024-11-26 18:50:05.100638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.944 [2024-11-26 18:50:05.103883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.944 [2024-11-26 18:50:05.103944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.944 Passthru0 00:05:33.944 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.944 
18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.944 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.944 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.944 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.944 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.944 { 00:05:33.944 "name": "Malloc0", 00:05:33.944 "aliases": [ 00:05:33.944 "e3eb0fef-18bd-4f61-9731-5dc6fbe9b81f" 00:05:33.944 ], 00:05:33.944 "product_name": "Malloc disk", 00:05:33.944 "block_size": 512, 00:05:33.944 "num_blocks": 16384, 00:05:33.944 "uuid": "e3eb0fef-18bd-4f61-9731-5dc6fbe9b81f", 00:05:33.944 "assigned_rate_limits": { 00:05:33.944 "rw_ios_per_sec": 0, 00:05:33.944 "rw_mbytes_per_sec": 0, 00:05:33.944 "r_mbytes_per_sec": 0, 00:05:33.944 "w_mbytes_per_sec": 0 00:05:33.944 }, 00:05:33.944 "claimed": true, 00:05:33.944 "claim_type": "exclusive_write", 00:05:33.945 "zoned": false, 00:05:33.945 "supported_io_types": { 00:05:33.945 "read": true, 00:05:33.945 "write": true, 00:05:33.945 "unmap": true, 00:05:33.945 "flush": true, 00:05:33.945 "reset": true, 00:05:33.945 "nvme_admin": false, 00:05:33.945 "nvme_io": false, 00:05:33.945 "nvme_io_md": false, 00:05:33.945 "write_zeroes": true, 00:05:33.945 "zcopy": true, 00:05:33.945 "get_zone_info": false, 00:05:33.945 "zone_management": false, 00:05:33.945 "zone_append": false, 00:05:33.945 "compare": false, 00:05:33.945 "compare_and_write": false, 00:05:33.945 "abort": true, 00:05:33.945 "seek_hole": false, 00:05:33.945 "seek_data": false, 00:05:33.945 "copy": true, 00:05:33.945 "nvme_iov_md": false 00:05:33.945 }, 00:05:33.945 "memory_domains": [ 00:05:33.945 { 00:05:33.945 "dma_device_id": "system", 00:05:33.945 "dma_device_type": 1 00:05:33.945 }, 00:05:33.945 { 00:05:33.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.945 "dma_device_type": 2 00:05:33.945 } 00:05:33.945 ], 00:05:33.945 "driver_specific": {} 00:05:33.945 }, 00:05:33.945 { 00:05:33.945 "name": "Passthru0", 00:05:33.945 "aliases": [ 00:05:33.945 "253a7bd6-053c-5855-98b1-aed8a038663c" 00:05:33.945 ], 00:05:33.945 "product_name": "passthru", 00:05:33.945 "block_size": 512, 00:05:33.945 "num_blocks": 16384, 00:05:33.945 "uuid": "253a7bd6-053c-5855-98b1-aed8a038663c", 00:05:33.945 "assigned_rate_limits": { 00:05:33.945 "rw_ios_per_sec": 0, 00:05:33.945 "rw_mbytes_per_sec": 0, 00:05:33.945 "r_mbytes_per_sec": 0, 00:05:33.945 "w_mbytes_per_sec": 0 00:05:33.945 }, 00:05:33.945 "claimed": false, 00:05:33.945 "zoned": false, 00:05:33.945 "supported_io_types": { 00:05:33.945 "read": true, 00:05:33.945 "write": true, 00:05:33.945 "unmap": true, 00:05:33.945 "flush": true, 00:05:33.945 "reset": true, 00:05:33.945 "nvme_admin": false, 00:05:33.945 "nvme_io": false, 00:05:33.945 "nvme_io_md": false, 00:05:33.945 "write_zeroes": true, 00:05:33.945 "zcopy": true, 00:05:33.945 "get_zone_info": false, 00:05:33.945 "zone_management": false, 00:05:33.945 "zone_append": false, 00:05:33.945 "compare": false, 00:05:33.945 "compare_and_write": false, 00:05:33.945 "abort": true, 00:05:33.945 "seek_hole": false, 00:05:33.945 "seek_data": false, 00:05:33.945 "copy": true, 00:05:33.945 "nvme_iov_md": false 00:05:33.945 }, 00:05:33.945 "memory_domains": [ 00:05:33.945 { 00:05:33.945 "dma_device_id": "system", 00:05:33.945 "dma_device_type": 1 00:05:33.945 }, 00:05:33.945 { 00:05:33.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.945 "dma_device_type": 2 
00:05:33.945 } 00:05:33.945 ], 00:05:33.945 "driver_specific": { 00:05:33.945 "passthru": { 00:05:33.945 "name": "Passthru0", 00:05:33.945 "base_bdev_name": "Malloc0" 00:05:33.945 } 00:05:33.945 } 00:05:33.945 } 00:05:33.945 ]' 00:05:33.945 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.203 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.203 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.203 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.203 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.203 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.203 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:34.203 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.203 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.203 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.203 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.203 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.203 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.203 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.203 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.203 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:34.203 ************************************ 00:05:34.203 END TEST rpc_integrity 00:05:34.203 ************************************ 00:05:34.203 18:50:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.203 00:05:34.203 real 0m0.339s 00:05:34.203 user 0m0.200s 00:05:34.203 sys 0m0.041s 00:05:34.203 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.204 18:50:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.204 18:50:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:34.204 18:50:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.204 18:50:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.204 18:50:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.204 ************************************ 00:05:34.204 START TEST rpc_plugins 00:05:34.204 ************************************ 00:05:34.204 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:34.204 18:50:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:34.204 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.204 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.204 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.204 18:50:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:34.204 18:50:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:34.204 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.204 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.204 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.204 18:50:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:34.204 { 00:05:34.204 "name": "Malloc1", 00:05:34.204 "aliases": 
[ 00:05:34.204 "f5106efc-537a-4f17-b79a-c1af94b8eb88" 00:05:34.204 ], 00:05:34.204 "product_name": "Malloc disk", 00:05:34.204 "block_size": 4096, 00:05:34.204 "num_blocks": 256, 00:05:34.204 "uuid": "f5106efc-537a-4f17-b79a-c1af94b8eb88", 00:05:34.204 "assigned_rate_limits": { 00:05:34.204 "rw_ios_per_sec": 0, 00:05:34.204 "rw_mbytes_per_sec": 0, 00:05:34.204 "r_mbytes_per_sec": 0, 00:05:34.204 "w_mbytes_per_sec": 0 00:05:34.204 }, 00:05:34.204 "claimed": false, 00:05:34.204 "zoned": false, 00:05:34.204 "supported_io_types": { 00:05:34.204 "read": true, 00:05:34.204 "write": true, 00:05:34.204 "unmap": true, 00:05:34.204 "flush": true, 00:05:34.204 "reset": true, 00:05:34.204 "nvme_admin": false, 00:05:34.204 "nvme_io": false, 00:05:34.204 "nvme_io_md": false, 00:05:34.204 "write_zeroes": true, 00:05:34.204 "zcopy": true, 00:05:34.204 "get_zone_info": false, 00:05:34.204 "zone_management": false, 00:05:34.204 "zone_append": false, 00:05:34.204 "compare": false, 00:05:34.204 "compare_and_write": false, 00:05:34.204 "abort": true, 00:05:34.204 "seek_hole": false, 00:05:34.204 "seek_data": false, 00:05:34.204 "copy": true, 00:05:34.204 "nvme_iov_md": false 00:05:34.204 }, 00:05:34.204 "memory_domains": [ 00:05:34.204 { 00:05:34.204 "dma_device_id": "system", 00:05:34.204 "dma_device_type": 1 00:05:34.204 }, 00:05:34.204 { 00:05:34.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.204 "dma_device_type": 2 00:05:34.204 } 00:05:34.204 ], 00:05:34.204 "driver_specific": {} 00:05:34.204 } 00:05:34.204 ]' 00:05:34.204 18:50:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:34.204 18:50:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:34.204 18:50:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:34.204 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.204 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.462 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.462 18:50:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:34.462 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.462 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.462 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.462 18:50:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:34.462 18:50:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:34.462 ************************************ 00:05:34.462 END TEST rpc_plugins 00:05:34.462 ************************************ 00:05:34.462 18:50:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:34.462 00:05:34.462 real 0m0.153s 00:05:34.462 user 0m0.093s 00:05:34.462 sys 0m0.020s 00:05:34.462 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.462 18:50:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.462 18:50:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:34.462 18:50:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.462 18:50:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.462 18:50:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.462 ************************************ 00:05:34.462 START TEST rpc_trace_cmd_test 00:05:34.462 ************************************ 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:34.462 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58268", 00:05:34.462 "tpoint_group_mask": "0x8", 00:05:34.462 "iscsi_conn": { 00:05:34.462 "mask": "0x2", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "scsi": { 00:05:34.462 "mask": "0x4", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "bdev": { 00:05:34.462 "mask": "0x8", 00:05:34.462 "tpoint_mask": "0xffffffffffffffff" 00:05:34.462 }, 00:05:34.462 "nvmf_rdma": { 00:05:34.462 "mask": "0x10", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "nvmf_tcp": { 00:05:34.462 "mask": "0x20", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "ftl": { 00:05:34.462 "mask": "0x40", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "blobfs": { 00:05:34.462 "mask": "0x80", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "dsa": { 00:05:34.462 "mask": "0x200", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "thread": { 00:05:34.462 "mask": "0x400", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "nvme_pcie": { 00:05:34.462 "mask": "0x800", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "iaa": { 00:05:34.462 "mask": "0x1000", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "nvme_tcp": { 00:05:34.462 "mask": "0x2000", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "bdev_nvme": { 00:05:34.462 "mask": "0x4000", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "sock": { 00:05:34.462 "mask": "0x8000", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "blob": { 00:05:34.462 "mask": "0x10000", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "bdev_raid": { 00:05:34.462 "mask": "0x20000", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 }, 00:05:34.462 "scheduler": { 00:05:34.462 "mask": "0x40000", 00:05:34.462 "tpoint_mask": "0x0" 00:05:34.462 } 00:05:34.462 }' 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:34.462 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:34.720 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:34.720 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:34.720 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:34.720 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:34.720 ************************************ 00:05:34.720 END TEST rpc_trace_cmd_test 00:05:34.720 ************************************ 00:05:34.720 18:50:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:34.720 00:05:34.720 real 0m0.274s 
00:05:34.720 user 0m0.237s 00:05:34.720 sys 0m0.025s 00:05:34.720 18:50:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.720 18:50:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:34.720 18:50:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:34.720 18:50:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:34.720 18:50:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:34.720 18:50:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.720 18:50:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.720 18:50:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.720 ************************************ 00:05:34.720 START TEST rpc_daemon_integrity 00:05:34.720 ************************************ 00:05:34.720 18:50:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:34.720 18:50:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.720 18:50:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.720 18:50:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.720 18:50:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.720 18:50:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.720 18:50:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:34.720 18:50:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:34.720 18:50:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:34.720 18:50:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.720 18:50:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.979 18:50:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.979 18:50:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:34.979 18:50:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:34.979 18:50:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.979 18:50:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.979 18:50:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.979 18:50:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:34.979 { 00:05:34.979 "name": "Malloc2", 00:05:34.979 "aliases": [ 00:05:34.979 "782db691-6998-466f-95b3-0635037ffb26" 00:05:34.979 ], 00:05:34.979 "product_name": "Malloc disk", 00:05:34.979 "block_size": 512, 00:05:34.979 "num_blocks": 16384, 00:05:34.979 "uuid": "782db691-6998-466f-95b3-0635037ffb26", 00:05:34.979 "assigned_rate_limits": { 00:05:34.979 "rw_ios_per_sec": 0, 00:05:34.979 "rw_mbytes_per_sec": 0, 00:05:34.979 "r_mbytes_per_sec": 0, 00:05:34.979 "w_mbytes_per_sec": 0 00:05:34.979 }, 00:05:34.979 "claimed": false, 00:05:34.979 "zoned": false, 00:05:34.979 "supported_io_types": { 00:05:34.979 "read": true, 00:05:34.979 "write": true, 00:05:34.979 "unmap": true, 00:05:34.979 "flush": true, 00:05:34.979 "reset": true, 00:05:34.979 "nvme_admin": false, 00:05:34.979 "nvme_io": false, 00:05:34.979 "nvme_io_md": false, 00:05:34.979 "write_zeroes": true, 00:05:34.979 "zcopy": true, 00:05:34.979 "get_zone_info": false, 00:05:34.979 "zone_management": false, 00:05:34.979 "zone_append": false, 00:05:34.979 "compare": false, 00:05:34.979 
"compare_and_write": false, 00:05:34.979 "abort": true, 00:05:34.979 "seek_hole": false, 00:05:34.979 "seek_data": false, 00:05:34.979 "copy": true, 00:05:34.979 "nvme_iov_md": false 00:05:34.979 }, 00:05:34.979 "memory_domains": [ 00:05:34.979 { 00:05:34.979 "dma_device_id": "system", 00:05:34.979 "dma_device_type": 1 00:05:34.979 }, 00:05:34.979 { 00:05:34.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.979 "dma_device_type": 2 00:05:34.979 } 00:05:34.979 ], 00:05:34.979 "driver_specific": {} 00:05:34.979 } 00:05:34.979 ]' 00:05:34.979 18:50:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.979 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.979 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:34.979 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.979 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.979 [2024-11-26 18:50:06.012361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:34.979 [2024-11-26 18:50:06.012491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.979 [2024-11-26 18:50:06.012548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:34.979 [2024-11-26 18:50:06.012585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.979 [2024-11-26 18:50:06.016464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.979 [2024-11-26 18:50:06.016539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.979 Passthru0 00:05:34.979 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.979 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:34.980 { 00:05:34.980 "name": "Malloc2", 00:05:34.980 "aliases": [ 00:05:34.980 "782db691-6998-466f-95b3-0635037ffb26" 00:05:34.980 ], 00:05:34.980 "product_name": "Malloc disk", 00:05:34.980 "block_size": 512, 00:05:34.980 "num_blocks": 16384, 00:05:34.980 "uuid": "782db691-6998-466f-95b3-0635037ffb26", 00:05:34.980 "assigned_rate_limits": { 00:05:34.980 "rw_ios_per_sec": 0, 00:05:34.980 "rw_mbytes_per_sec": 0, 00:05:34.980 "r_mbytes_per_sec": 0, 00:05:34.980 "w_mbytes_per_sec": 0 00:05:34.980 }, 00:05:34.980 "claimed": true, 00:05:34.980 "claim_type": "exclusive_write", 00:05:34.980 "zoned": false, 00:05:34.980 "supported_io_types": { 00:05:34.980 "read": true, 00:05:34.980 "write": true, 00:05:34.980 "unmap": true, 00:05:34.980 "flush": true, 00:05:34.980 "reset": true, 00:05:34.980 "nvme_admin": false, 00:05:34.980 "nvme_io": false, 00:05:34.980 "nvme_io_md": false, 00:05:34.980 "write_zeroes": true, 00:05:34.980 "zcopy": true, 00:05:34.980 "get_zone_info": false, 00:05:34.980 "zone_management": false, 00:05:34.980 "zone_append": false, 00:05:34.980 "compare": false, 00:05:34.980 "compare_and_write": false, 00:05:34.980 "abort": true, 00:05:34.980 "seek_hole": false, 00:05:34.980 "seek_data": false, 
00:05:34.980 "copy": true, 00:05:34.980 "nvme_iov_md": false 00:05:34.980 }, 00:05:34.980 "memory_domains": [ 00:05:34.980 { 00:05:34.980 "dma_device_id": "system", 00:05:34.980 "dma_device_type": 1 00:05:34.980 }, 00:05:34.980 { 00:05:34.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.980 "dma_device_type": 2 00:05:34.980 } 00:05:34.980 ], 00:05:34.980 "driver_specific": {} 00:05:34.980 }, 00:05:34.980 { 00:05:34.980 "name": "Passthru0", 00:05:34.980 "aliases": [ 00:05:34.980 "e767e07b-54bc-56f4-ac13-c5c57e500850" 00:05:34.980 ], 00:05:34.980 "product_name": "passthru", 00:05:34.980 "block_size": 512, 00:05:34.980 "num_blocks": 16384, 00:05:34.980 "uuid": "e767e07b-54bc-56f4-ac13-c5c57e500850", 00:05:34.980 "assigned_rate_limits": { 00:05:34.980 "rw_ios_per_sec": 0, 00:05:34.980 "rw_mbytes_per_sec": 0, 00:05:34.980 "r_mbytes_per_sec": 0, 00:05:34.980 "w_mbytes_per_sec": 0 00:05:34.980 }, 00:05:34.980 "claimed": false, 00:05:34.980 "zoned": false, 00:05:34.980 "supported_io_types": { 00:05:34.980 "read": true, 00:05:34.980 "write": true, 00:05:34.980 "unmap": true, 00:05:34.980 "flush": true, 00:05:34.980 "reset": true, 00:05:34.980 "nvme_admin": false, 00:05:34.980 "nvme_io": false, 00:05:34.980 "nvme_io_md": false, 00:05:34.980 "write_zeroes": true, 00:05:34.980 "zcopy": true, 00:05:34.980 "get_zone_info": false, 00:05:34.980 "zone_management": false, 00:05:34.980 "zone_append": false, 00:05:34.980 "compare": false, 00:05:34.980 "compare_and_write": false, 00:05:34.980 "abort": true, 00:05:34.980 "seek_hole": false, 00:05:34.980 "seek_data": false, 00:05:34.980 "copy": true, 00:05:34.980 "nvme_iov_md": false 00:05:34.980 }, 00:05:34.980 "memory_domains": [ 00:05:34.980 { 00:05:34.980 "dma_device_id": "system", 00:05:34.980 "dma_device_type": 1 00:05:34.980 }, 00:05:34.980 { 00:05:34.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.980 "dma_device_type": 2 00:05:34.980 } 00:05:34.980 ], 00:05:34.980 "driver_specific": { 00:05:34.980 "passthru": { 00:05:34.980 "name": "Passthru0", 00:05:34.980 "base_bdev_name": "Malloc2" 00:05:34.980 } 00:05:34.980 } 00:05:34.980 } 00:05:34.980 ]' 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:34.980 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:35.239 ************************************ 00:05:35.239 END TEST rpc_daemon_integrity 00:05:35.239 ************************************ 00:05:35.239 18:50:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.239 00:05:35.239 real 0m0.355s 00:05:35.239 user 0m0.217s 00:05:35.239 sys 0m0.040s 00:05:35.239 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.239 18:50:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.239 18:50:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:35.239 18:50:06 rpc -- rpc/rpc.sh@84 -- # killprocess 58268 00:05:35.239 18:50:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 58268 ']' 00:05:35.239 18:50:06 rpc -- common/autotest_common.sh@958 -- # kill -0 58268 00:05:35.239 18:50:06 rpc -- common/autotest_common.sh@959 -- # uname 00:05:35.239 18:50:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.239 18:50:06 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58268 00:05:35.239 killing process with pid 58268 00:05:35.239 18:50:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.239 18:50:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.239 18:50:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58268' 00:05:35.239 18:50:06 rpc -- common/autotest_common.sh@973 -- # kill 58268 00:05:35.239 18:50:06 rpc -- common/autotest_common.sh@978 -- # wait 58268 00:05:37.770 ************************************ 00:05:37.770 END TEST rpc 00:05:37.770 ************************************ 00:05:37.770 00:05:37.770 real 0m4.835s 00:05:37.770 user 0m5.585s 00:05:37.770 sys 0m0.731s 00:05:37.770 18:50:08 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.770 18:50:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.770 18:50:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:37.770 18:50:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.770 18:50:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.770 18:50:08 -- common/autotest_common.sh@10 -- # set +x 00:05:37.770 ************************************ 00:05:37.770 START TEST skip_rpc 00:05:37.770 ************************************ 00:05:37.770 18:50:08 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:37.770 * Looking for test storage... 
00:05:37.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:37.770 18:50:08 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.770 18:50:08 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.770 18:50:08 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.770 18:50:08 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.770 18:50:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:37.770 18:50:08 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.770 18:50:08 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.770 --rc genhtml_branch_coverage=1 00:05:37.770 --rc genhtml_function_coverage=1 00:05:37.770 --rc genhtml_legend=1 00:05:37.770 --rc geninfo_all_blocks=1 00:05:37.770 --rc geninfo_unexecuted_blocks=1 00:05:37.770 00:05:37.770 ' 00:05:37.770 18:50:08 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.770 --rc genhtml_branch_coverage=1 00:05:37.770 --rc genhtml_function_coverage=1 00:05:37.770 --rc genhtml_legend=1 00:05:37.770 --rc geninfo_all_blocks=1 00:05:37.770 --rc geninfo_unexecuted_blocks=1 00:05:37.770 00:05:37.770 ' 00:05:37.770 18:50:08 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:37.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.770 --rc genhtml_branch_coverage=1 00:05:37.770 --rc genhtml_function_coverage=1 00:05:37.770 --rc genhtml_legend=1 00:05:37.770 --rc geninfo_all_blocks=1 00:05:37.770 --rc geninfo_unexecuted_blocks=1 00:05:37.770 00:05:37.770 ' 00:05:37.770 18:50:08 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.770 --rc genhtml_branch_coverage=1 00:05:37.770 --rc genhtml_function_coverage=1 00:05:37.770 --rc genhtml_legend=1 00:05:37.770 --rc geninfo_all_blocks=1 00:05:37.770 --rc geninfo_unexecuted_blocks=1 00:05:37.770 00:05:37.770 ' 00:05:37.770 18:50:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:37.770 18:50:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:37.770 18:50:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:37.770 18:50:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.771 18:50:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.771 18:50:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.771 ************************************ 00:05:37.771 START TEST skip_rpc 00:05:37.771 ************************************ 00:05:37.771 18:50:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:37.771 18:50:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58491 00:05:37.771 18:50:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.771 18:50:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:37.771 18:50:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:37.771 [2024-11-26 18:50:08.743799] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
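[Editor's note] The lcov gate that keeps reappearing in these traces (lt 1.15 2 via cmp_versions 1.15 '<' 2) is a plain component-wise version compare: split both strings on the characters .-:, walk the components left to right padding the shorter list with zeros, and decide at the first inequality. A compact sketch of that logic — same idea, not the verbatim scripts/common.sh:

  ver_lt() {                                    # "is $1 < $2" under component-wise compare
      local IFS=.-: v
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
      done
      return 1                                  # equal -> not less-than
  }
  ver_lt 1.15 2 && echo 'lcov < 2: use the legacy --rc lcov_* option names'
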
00:05:37.771 [2024-11-26 18:50:08.744218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58491 ] 00:05:37.771 [2024-11-26 18:50:08.926353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.029 [2024-11-26 18:50:09.036229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58491 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58491 ']' 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58491 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58491 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58491' 00:05:43.297 killing process with pid 58491 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58491 00:05:43.297 18:50:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58491 00:05:44.707 00:05:44.707 real 0m7.201s 00:05:44.707 user 0m6.758s 00:05:44.707 sys 0m0.336s 00:05:44.707 18:50:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.707 18:50:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.707 ************************************ 00:05:44.707 END TEST skip_rpc 00:05:44.707 
************************************ 00:05:44.707 18:50:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:44.707 18:50:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.707 18:50:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.707 18:50:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.707 ************************************ 00:05:44.707 START TEST skip_rpc_with_json 00:05:44.707 ************************************ 00:05:44.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58601 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58601 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58601 ']' 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.707 18:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.965 [2024-11-26 18:50:15.971312] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:05:44.965 [2024-11-26 18:50:15.971659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58601 ] 00:05:44.965 [2024-11-26 18:50:16.144307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.223 [2024-11-26 18:50:16.247835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.160 [2024-11-26 18:50:17.011351] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:46.160 request: 00:05:46.160 { 00:05:46.160 "trtype": "tcp", 00:05:46.160 "method": "nvmf_get_transports", 00:05:46.160 "req_id": 1 00:05:46.160 } 00:05:46.160 Got JSON-RPC error response 00:05:46.160 response: 00:05:46.160 { 00:05:46.160 "code": -19, 00:05:46.160 "message": "No such device" 00:05:46.160 } 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.160 [2024-11-26 18:50:17.023505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.160 18:50:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:46.160 { 00:05:46.160 "subsystems": [ 00:05:46.160 { 00:05:46.160 "subsystem": "fsdev", 00:05:46.160 "config": [ 00:05:46.160 { 00:05:46.160 "method": "fsdev_set_opts", 00:05:46.160 "params": { 00:05:46.160 "fsdev_io_pool_size": 65535, 00:05:46.160 "fsdev_io_cache_size": 256 00:05:46.160 } 00:05:46.160 } 00:05:46.160 ] 00:05:46.160 }, 00:05:46.160 { 00:05:46.160 "subsystem": "keyring", 00:05:46.160 "config": [] 00:05:46.160 }, 00:05:46.160 { 00:05:46.160 "subsystem": "iobuf", 00:05:46.160 "config": [ 00:05:46.160 { 00:05:46.160 "method": "iobuf_set_options", 00:05:46.160 "params": { 00:05:46.160 "small_pool_count": 8192, 00:05:46.160 "large_pool_count": 1024, 00:05:46.160 "small_bufsize": 8192, 00:05:46.160 "large_bufsize": 135168, 00:05:46.160 "enable_numa": false 00:05:46.160 } 00:05:46.160 } 00:05:46.160 ] 00:05:46.160 }, 00:05:46.160 { 00:05:46.160 "subsystem": "sock", 00:05:46.160 "config": [ 00:05:46.160 { 
00:05:46.160 "method": "sock_set_default_impl", 00:05:46.160 "params": { 00:05:46.160 "impl_name": "posix" 00:05:46.160 } 00:05:46.160 }, 00:05:46.160 { 00:05:46.160 "method": "sock_impl_set_options", 00:05:46.160 "params": { 00:05:46.160 "impl_name": "ssl", 00:05:46.160 "recv_buf_size": 4096, 00:05:46.160 "send_buf_size": 4096, 00:05:46.160 "enable_recv_pipe": true, 00:05:46.160 "enable_quickack": false, 00:05:46.160 "enable_placement_id": 0, 00:05:46.160 "enable_zerocopy_send_server": true, 00:05:46.160 "enable_zerocopy_send_client": false, 00:05:46.160 "zerocopy_threshold": 0, 00:05:46.160 "tls_version": 0, 00:05:46.160 "enable_ktls": false 00:05:46.160 } 00:05:46.160 }, 00:05:46.160 { 00:05:46.160 "method": "sock_impl_set_options", 00:05:46.160 "params": { 00:05:46.160 "impl_name": "posix", 00:05:46.160 "recv_buf_size": 2097152, 00:05:46.160 "send_buf_size": 2097152, 00:05:46.160 "enable_recv_pipe": true, 00:05:46.160 "enable_quickack": false, 00:05:46.160 "enable_placement_id": 0, 00:05:46.160 "enable_zerocopy_send_server": true, 00:05:46.160 "enable_zerocopy_send_client": false, 00:05:46.160 "zerocopy_threshold": 0, 00:05:46.160 "tls_version": 0, 00:05:46.160 "enable_ktls": false 00:05:46.160 } 00:05:46.160 } 00:05:46.160 ] 00:05:46.160 }, 00:05:46.160 { 00:05:46.160 "subsystem": "vmd", 00:05:46.160 "config": [] 00:05:46.160 }, 00:05:46.160 { 00:05:46.160 "subsystem": "accel", 00:05:46.160 "config": [ 00:05:46.160 { 00:05:46.160 "method": "accel_set_options", 00:05:46.160 "params": { 00:05:46.160 "small_cache_size": 128, 00:05:46.160 "large_cache_size": 16, 00:05:46.160 "task_count": 2048, 00:05:46.160 "sequence_count": 2048, 00:05:46.160 "buf_count": 2048 00:05:46.160 } 00:05:46.160 } 00:05:46.160 ] 00:05:46.160 }, 00:05:46.160 { 00:05:46.160 "subsystem": "bdev", 00:05:46.160 "config": [ 00:05:46.160 { 00:05:46.160 "method": "bdev_set_options", 00:05:46.160 "params": { 00:05:46.160 "bdev_io_pool_size": 65535, 00:05:46.160 "bdev_io_cache_size": 256, 00:05:46.160 "bdev_auto_examine": true, 00:05:46.160 "iobuf_small_cache_size": 128, 00:05:46.160 "iobuf_large_cache_size": 16 00:05:46.160 } 00:05:46.160 }, 00:05:46.160 { 00:05:46.160 "method": "bdev_raid_set_options", 00:05:46.160 "params": { 00:05:46.160 "process_window_size_kb": 1024, 00:05:46.160 "process_max_bandwidth_mb_sec": 0 00:05:46.160 } 00:05:46.160 }, 00:05:46.160 { 00:05:46.160 "method": "bdev_iscsi_set_options", 00:05:46.160 "params": { 00:05:46.160 "timeout_sec": 30 00:05:46.160 } 00:05:46.160 }, 00:05:46.160 { 00:05:46.160 "method": "bdev_nvme_set_options", 00:05:46.160 "params": { 00:05:46.160 "action_on_timeout": "none", 00:05:46.160 "timeout_us": 0, 00:05:46.160 "timeout_admin_us": 0, 00:05:46.160 "keep_alive_timeout_ms": 10000, 00:05:46.160 "arbitration_burst": 0, 00:05:46.160 "low_priority_weight": 0, 00:05:46.160 "medium_priority_weight": 0, 00:05:46.160 "high_priority_weight": 0, 00:05:46.160 "nvme_adminq_poll_period_us": 10000, 00:05:46.160 "nvme_ioq_poll_period_us": 0, 00:05:46.160 "io_queue_requests": 0, 00:05:46.160 "delay_cmd_submit": true, 00:05:46.160 "transport_retry_count": 4, 00:05:46.160 "bdev_retry_count": 3, 00:05:46.160 "transport_ack_timeout": 0, 00:05:46.160 "ctrlr_loss_timeout_sec": 0, 00:05:46.160 "reconnect_delay_sec": 0, 00:05:46.160 "fast_io_fail_timeout_sec": 0, 00:05:46.160 "disable_auto_failback": false, 00:05:46.160 "generate_uuids": false, 00:05:46.160 "transport_tos": 0, 00:05:46.160 "nvme_error_stat": false, 00:05:46.160 "rdma_srq_size": 0, 00:05:46.160 "io_path_stat": false, 
00:05:46.160 "allow_accel_sequence": false, 00:05:46.160 "rdma_max_cq_size": 0, 00:05:46.160 "rdma_cm_event_timeout_ms": 0, 00:05:46.160 "dhchap_digests": [ 00:05:46.160 "sha256", 00:05:46.160 "sha384", 00:05:46.160 "sha512" 00:05:46.160 ], 00:05:46.160 "dhchap_dhgroups": [ 00:05:46.160 "null", 00:05:46.160 "ffdhe2048", 00:05:46.160 "ffdhe3072", 00:05:46.160 "ffdhe4096", 00:05:46.160 "ffdhe6144", 00:05:46.160 "ffdhe8192" 00:05:46.160 ] 00:05:46.160 } 00:05:46.160 }, 00:05:46.160 { 00:05:46.161 "method": "bdev_nvme_set_hotplug", 00:05:46.161 "params": { 00:05:46.161 "period_us": 100000, 00:05:46.161 "enable": false 00:05:46.161 } 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "method": "bdev_wait_for_examine" 00:05:46.161 } 00:05:46.161 ] 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "subsystem": "scsi", 00:05:46.161 "config": null 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "subsystem": "scheduler", 00:05:46.161 "config": [ 00:05:46.161 { 00:05:46.161 "method": "framework_set_scheduler", 00:05:46.161 "params": { 00:05:46.161 "name": "static" 00:05:46.161 } 00:05:46.161 } 00:05:46.161 ] 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "subsystem": "vhost_scsi", 00:05:46.161 "config": [] 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "subsystem": "vhost_blk", 00:05:46.161 "config": [] 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "subsystem": "ublk", 00:05:46.161 "config": [] 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "subsystem": "nbd", 00:05:46.161 "config": [] 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "subsystem": "nvmf", 00:05:46.161 "config": [ 00:05:46.161 { 00:05:46.161 "method": "nvmf_set_config", 00:05:46.161 "params": { 00:05:46.161 "discovery_filter": "match_any", 00:05:46.161 "admin_cmd_passthru": { 00:05:46.161 "identify_ctrlr": false 00:05:46.161 }, 00:05:46.161 "dhchap_digests": [ 00:05:46.161 "sha256", 00:05:46.161 "sha384", 00:05:46.161 "sha512" 00:05:46.161 ], 00:05:46.161 "dhchap_dhgroups": [ 00:05:46.161 "null", 00:05:46.161 "ffdhe2048", 00:05:46.161 "ffdhe3072", 00:05:46.161 "ffdhe4096", 00:05:46.161 "ffdhe6144", 00:05:46.161 "ffdhe8192" 00:05:46.161 ] 00:05:46.161 } 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "method": "nvmf_set_max_subsystems", 00:05:46.161 "params": { 00:05:46.161 "max_subsystems": 1024 00:05:46.161 } 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "method": "nvmf_set_crdt", 00:05:46.161 "params": { 00:05:46.161 "crdt1": 0, 00:05:46.161 "crdt2": 0, 00:05:46.161 "crdt3": 0 00:05:46.161 } 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "method": "nvmf_create_transport", 00:05:46.161 "params": { 00:05:46.161 "trtype": "TCP", 00:05:46.161 "max_queue_depth": 128, 00:05:46.161 "max_io_qpairs_per_ctrlr": 127, 00:05:46.161 "in_capsule_data_size": 4096, 00:05:46.161 "max_io_size": 131072, 00:05:46.161 "io_unit_size": 131072, 00:05:46.161 "max_aq_depth": 128, 00:05:46.161 "num_shared_buffers": 511, 00:05:46.161 "buf_cache_size": 4294967295, 00:05:46.161 "dif_insert_or_strip": false, 00:05:46.161 "zcopy": false, 00:05:46.161 "c2h_success": true, 00:05:46.161 "sock_priority": 0, 00:05:46.161 "abort_timeout_sec": 1, 00:05:46.161 "ack_timeout": 0, 00:05:46.161 "data_wr_pool_size": 0 00:05:46.161 } 00:05:46.161 } 00:05:46.161 ] 00:05:46.161 }, 00:05:46.161 { 00:05:46.161 "subsystem": "iscsi", 00:05:46.161 "config": [ 00:05:46.161 { 00:05:46.161 "method": "iscsi_set_options", 00:05:46.161 "params": { 00:05:46.161 "node_base": "iqn.2016-06.io.spdk", 00:05:46.161 "max_sessions": 128, 00:05:46.161 "max_connections_per_session": 2, 00:05:46.161 "max_queue_depth": 64, 00:05:46.161 
"default_time2wait": 2, 00:05:46.161 "default_time2retain": 20, 00:05:46.161 "first_burst_length": 8192, 00:05:46.161 "immediate_data": true, 00:05:46.161 "allow_duplicated_isid": false, 00:05:46.161 "error_recovery_level": 0, 00:05:46.161 "nop_timeout": 60, 00:05:46.161 "nop_in_interval": 30, 00:05:46.161 "disable_chap": false, 00:05:46.161 "require_chap": false, 00:05:46.161 "mutual_chap": false, 00:05:46.161 "chap_group": 0, 00:05:46.161 "max_large_datain_per_connection": 64, 00:05:46.161 "max_r2t_per_connection": 4, 00:05:46.161 "pdu_pool_size": 36864, 00:05:46.161 "immediate_data_pool_size": 16384, 00:05:46.161 "data_out_pool_size": 2048 00:05:46.161 } 00:05:46.161 } 00:05:46.161 ] 00:05:46.161 } 00:05:46.161 ] 00:05:46.161 } 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58601 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58601 ']' 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58601 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58601 00:05:46.161 killing process with pid 58601 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58601' 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58601 00:05:46.161 18:50:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58601 00:05:48.706 18:50:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58646 00:05:48.706 18:50:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:48.706 18:50:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.972 18:50:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58646 00:05:53.972 18:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58646 ']' 00:05:53.972 18:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58646 00:05:53.972 18:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:53.972 18:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.972 18:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58646 00:05:53.972 killing process with pid 58646 00:05:53.972 18:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.972 18:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.972 18:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58646' 00:05:53.972 18:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58646 00:05:53.972 18:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58646 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:55.346 00:05:55.346 real 0m10.633s 00:05:55.346 user 0m10.359s 00:05:55.346 sys 0m0.725s 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.346 ************************************ 00:05:55.346 END TEST skip_rpc_with_json 00:05:55.346 ************************************ 00:05:55.346 18:50:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:55.346 18:50:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.346 18:50:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.346 18:50:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.346 ************************************ 00:05:55.346 START TEST skip_rpc_with_delay 00:05:55.346 ************************************ 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.346 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.347 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.347 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.347 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.347 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:55.347 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.606 [2024-11-26 18:50:26.672010] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
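[Editor's note] The error above is the whole point of skip_rpc_with_delay: --wait-for-rpc defers framework initialization until an explicit framework_start_init RPC arrives, which is meaningless when --no-rpc-server removes the only channel for sending that RPC, so startup must fail and the NOT wrapper asserts the non-zero exit. Used correctly, the flag opens an early-init window; a sketch of the normal deferred-init flow, not taken from this test (the pre-init option shown is illustrative):

  build/bin/spdk_tgt -m 0x1 --wait-for-rpc & pid=$!
  sleep 1                                                     # the harness uses waitforlisten; a sleep stands in here
  scripts/rpc.py bdev_set_options --bdev-io-pool-size 65535   # example of an option that must precede subsystem init
  scripts/rpc.py framework_start_init                         # now run the deferred subsystem initialization
  kill $pid; wait $pid
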
00:05:55.606 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:55.606 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.606 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.606 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.606 00:05:55.606 real 0m0.197s 00:05:55.606 user 0m0.110s 00:05:55.606 sys 0m0.084s 00:05:55.606 ************************************ 00:05:55.606 END TEST skip_rpc_with_delay 00:05:55.606 ************************************ 00:05:55.606 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.606 18:50:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:55.606 18:50:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:55.606 18:50:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:55.606 18:50:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:55.606 18:50:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.606 18:50:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.606 18:50:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.606 ************************************ 00:05:55.606 START TEST exit_on_failed_rpc_init 00:05:55.606 ************************************ 00:05:55.606 18:50:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:55.606 18:50:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58774 00:05:55.606 18:50:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58774 00:05:55.606 18:50:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58774 ']' 00:05:55.606 18:50:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.606 18:50:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.606 18:50:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.606 18:50:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.606 18:50:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.606 18:50:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.864 [2024-11-26 18:50:26.924985] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:05:55.864 [2024-11-26 18:50:26.925161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58774 ] 00:05:56.122 [2024-11-26 18:50:27.112962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.122 [2024-11-26 18:50:27.216148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:57.056 18:50:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.056 [2024-11-26 18:50:28.095104] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:05:57.056 [2024-11-26 18:50:28.095472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58792 ] 00:05:57.314 [2024-11-26 18:50:28.272016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.314 [2024-11-26 18:50:28.376256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.314 [2024-11-26 18:50:28.376383] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
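[Editor's note] The failure injected by exit_on_failed_rpc_init is nothing more exotic than a second spdk_tgt pointed at the default RPC socket while the first still owns it: rpc.c refuses with "in use. Specify another.", the app stops with a non-zero code, and the NOT wrapper converts that expected failure into a pass. Reproduced by hand — a sketch with the same core masks as the test, but a crude sleep in place of waitforlisten:

  build/bin/spdk_tgt -m 0x1 & first=$!          # first instance binds /var/tmp/spdk.sock
  sleep 1                                       # the test proper waits for the listen socket instead
  if build/bin/spdk_tgt -m 0x2; then            # same default socket -> rpc_listen fails during init
      echo 'second instance unexpectedly started' >&2
      exit 1
  fi                                            # a non-zero exit here is the expected outcome
  kill $first; wait $first
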
00:05:57.314 [2024-11-26 18:50:28.376408] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:57.314 [2024-11-26 18:50:28.376439] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.571 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58774 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58774 ']' 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58774 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58774 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.572 killing process with pid 58774 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58774' 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58774 00:05:57.572 18:50:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58774 00:06:00.102 00:06:00.102 real 0m4.025s 00:06:00.102 user 0m4.518s 00:06:00.102 sys 0m0.521s 00:06:00.102 18:50:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.102 ************************************ 00:06:00.102 END TEST exit_on_failed_rpc_init 00:06:00.102 ************************************ 00:06:00.102 18:50:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.102 18:50:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:00.102 ************************************ 00:06:00.102 END TEST skip_rpc 00:06:00.102 ************************************ 00:06:00.102 00:06:00.102 real 0m22.450s 00:06:00.102 user 0m21.935s 00:06:00.102 sys 0m1.863s 00:06:00.102 18:50:30 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.102 18:50:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.102 18:50:30 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:00.102 18:50:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.102 18:50:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.102 18:50:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.102 
************************************ 00:06:00.102 START TEST rpc_client 00:06:00.102 ************************************ 00:06:00.102 18:50:30 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:00.102 * Looking for test storage... 00:06:00.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:00.102 18:50:30 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.102 18:50:30 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.102 18:50:30 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.102 18:50:31 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.102 18:50:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:00.102 18:50:31 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.102 18:50:31 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 18:50:31 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 18:50:31 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 18:50:31 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 18:50:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:00.102 OK 00:06:00.102 18:50:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:00.102 00:06:00.102 real 0m0.224s 00:06:00.102 user 0m0.125s 00:06:00.102 sys 0m0.104s 00:06:00.102 18:50:31 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.102 18:50:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:00.102 ************************************ 00:06:00.102 END TEST rpc_client 00:06:00.102 ************************************ 00:06:00.102 18:50:31 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:00.102 18:50:31 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.102 18:50:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.102 18:50:31 -- common/autotest_common.sh@10 -- # set +x 00:06:00.102 ************************************ 00:06:00.102 START TEST json_config 00:06:00.102 ************************************ 00:06:00.102 18:50:31 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:00.102 18:50:31 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.102 18:50:31 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.102 18:50:31 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.361 18:50:31 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.361 18:50:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.361 18:50:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.361 18:50:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.361 18:50:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.361 18:50:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.361 18:50:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.361 18:50:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.361 18:50:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.361 18:50:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.361 18:50:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.361 18:50:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.361 18:50:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:00.361 18:50:31 json_config -- scripts/common.sh@345 -- # : 1 00:06:00.361 18:50:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.361 18:50:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.361 18:50:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:00.361 18:50:31 json_config -- scripts/common.sh@353 -- # local d=1 00:06:00.361 18:50:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.361 18:50:31 json_config -- scripts/common.sh@355 -- # echo 1 00:06:00.361 18:50:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.361 18:50:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:00.361 18:50:31 json_config -- scripts/common.sh@353 -- # local d=2 00:06:00.361 18:50:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.361 18:50:31 json_config -- scripts/common.sh@355 -- # echo 2 00:06:00.361 18:50:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.361 18:50:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.361 18:50:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.361 18:50:31 json_config -- scripts/common.sh@368 -- # return 0 00:06:00.361 18:50:31 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.361 18:50:31 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.361 --rc genhtml_branch_coverage=1 00:06:00.361 --rc genhtml_function_coverage=1 00:06:00.361 --rc genhtml_legend=1 00:06:00.361 --rc geninfo_all_blocks=1 00:06:00.361 --rc geninfo_unexecuted_blocks=1 00:06:00.361 00:06:00.361 ' 00:06:00.361 18:50:31 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.361 --rc genhtml_branch_coverage=1 00:06:00.361 --rc genhtml_function_coverage=1 00:06:00.361 --rc genhtml_legend=1 00:06:00.361 --rc geninfo_all_blocks=1 00:06:00.361 --rc geninfo_unexecuted_blocks=1 00:06:00.361 00:06:00.361 ' 00:06:00.361 18:50:31 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.361 --rc genhtml_branch_coverage=1 00:06:00.361 --rc genhtml_function_coverage=1 00:06:00.362 --rc genhtml_legend=1 00:06:00.362 --rc geninfo_all_blocks=1 00:06:00.362 --rc geninfo_unexecuted_blocks=1 00:06:00.362 00:06:00.362 ' 00:06:00.362 18:50:31 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.362 --rc genhtml_branch_coverage=1 00:06:00.362 --rc genhtml_function_coverage=1 00:06:00.362 --rc genhtml_legend=1 00:06:00.362 --rc geninfo_all_blocks=1 00:06:00.362 --rc geninfo_unexecuted_blocks=1 00:06:00.362 00:06:00.362 ' 00:06:00.362 18:50:31 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.362 18:50:31 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:28610d4e-7ecc-4b99-9ad4-c89cbb8dd769 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=28610d4e-7ecc-4b99-9ad4-c89cbb8dd769 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.362 18:50:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.362 18:50:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.362 18:50:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.362 18:50:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.362 18:50:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.362 18:50:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.362 18:50:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.362 18:50:31 json_config -- paths/export.sh@5 -- # export PATH 00:06:00.362 18:50:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@51 -- # : 0 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:00.362 18:50:31 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:00.362 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:00.362 18:50:31 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:00.362 18:50:31 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:00.362 18:50:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:00.362 18:50:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:00.362 18:50:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:00.362 18:50:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:00.362 WARNING: No tests are enabled so not running JSON configuration tests 00:06:00.362 18:50:31 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:00.362 18:50:31 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:00.362 00:06:00.362 real 0m0.200s 00:06:00.362 user 0m0.127s 00:06:00.362 sys 0m0.072s 00:06:00.362 ************************************ 00:06:00.362 END TEST json_config 00:06:00.362 ************************************ 00:06:00.362 18:50:31 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.362 18:50:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.362 18:50:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:00.362 18:50:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.362 18:50:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.362 18:50:31 -- common/autotest_common.sh@10 -- # set +x 00:06:00.362 ************************************ 00:06:00.362 START TEST json_config_extra_key 00:06:00.362 ************************************ 00:06:00.362 18:50:31 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:00.362 18:50:31 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.362 18:50:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.362 18:50:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.621 18:50:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.621 18:50:31 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.621 18:50:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:00.621 18:50:31 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.621 18:50:31 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.622 --rc genhtml_branch_coverage=1 00:06:00.622 --rc genhtml_function_coverage=1 00:06:00.622 --rc genhtml_legend=1 00:06:00.622 --rc geninfo_all_blocks=1 00:06:00.622 --rc geninfo_unexecuted_blocks=1 00:06:00.622 00:06:00.622 ' 00:06:00.622 18:50:31 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.622 --rc genhtml_branch_coverage=1 00:06:00.622 --rc genhtml_function_coverage=1 00:06:00.622 --rc genhtml_legend=1 00:06:00.622 --rc geninfo_all_blocks=1 00:06:00.622 --rc geninfo_unexecuted_blocks=1 00:06:00.622 00:06:00.622 ' 00:06:00.622 18:50:31 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.622 --rc genhtml_branch_coverage=1 00:06:00.622 --rc genhtml_function_coverage=1 00:06:00.622 --rc genhtml_legend=1 00:06:00.622 --rc geninfo_all_blocks=1 00:06:00.622 --rc geninfo_unexecuted_blocks=1 00:06:00.622 00:06:00.622 ' 00:06:00.622 18:50:31 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.622 --rc genhtml_branch_coverage=1 00:06:00.622 --rc 
genhtml_function_coverage=1 00:06:00.622 --rc genhtml_legend=1 00:06:00.622 --rc geninfo_all_blocks=1 00:06:00.622 --rc geninfo_unexecuted_blocks=1 00:06:00.622 00:06:00.622 ' 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:28610d4e-7ecc-4b99-9ad4-c89cbb8dd769 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=28610d4e-7ecc-4b99-9ad4-c89cbb8dd769 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.622 18:50:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.622 18:50:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.622 18:50:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.622 18:50:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.622 18:50:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.622 18:50:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.622 18:50:31 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.622 18:50:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:00.622 18:50:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:00.622 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:00.622 18:50:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:00.622 INFO: launching applications... 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
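The repeated "[: : integer expression expected" messages above come from test/nvmf/common.sh line 33, where the trace shows '[' '' -eq 1 ']': the variable under test expands to an empty string, and test cannot compare an empty operand as an integer. A minimal sketch of the usual guard, assuming the flag should simply default to 0 when unset; the actual variable name at line 33 is not visible in this log, so NVMF_NICS_FLAG below is purely illustrative:

  # hypothetical guard: expand with a default so '[' never sees an empty operand
  if [ "${NVMF_NICS_FLAG:-0}" -eq 1 ]; then
      have_pci_nics=1
  fi

The failed comparison returns non-zero, so execution continues as if the test were false; the error is cosmetic in this run, but it could mask a genuinely misconfigured flag.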
00:06:00.622 18:50:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:00.622 18:50:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:00.622 18:50:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:00.622 18:50:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:00.622 18:50:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:00.622 18:50:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:00.622 18:50:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.622 18:50:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.623 18:50:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59002 00:06:00.623 18:50:31 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:00.623 18:50:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:00.623 Waiting for target to run... 00:06:00.623 18:50:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59002 /var/tmp/spdk_tgt.sock 00:06:00.623 18:50:31 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59002 ']' 00:06:00.623 18:50:31 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:00.623 18:50:31 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:00.623 18:50:31 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:00.623 18:50:31 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.623 18:50:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:00.623 [2024-11-26 18:50:31.726379] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:06:00.623 [2024-11-26 18:50:31.726542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59002 ] 00:06:00.881 [2024-11-26 18:50:32.053605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.140 [2024-11-26 18:50:32.171091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.077 00:06:02.077 INFO: shutting down applications... 00:06:02.077 18:50:32 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.077 18:50:32 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:02.077 18:50:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:02.077 18:50:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
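json_config_test_shutdown_app, entered in the trace that follows, stops the target the way every test in this run does: send SIGINT, then probe the pid with kill -0 every half second for at most 30 tries. A condensed sketch of that loop:

  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      # kill -0 only checks that the pid still exists; no signal is delivered
      kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5
  done

The five sleep 0.5 iterations visible below suggest this spdk_tgt needed roughly 2.5 seconds to drain and exit after SIGINT.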
00:06:02.077 18:50:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:02.077 18:50:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:02.077 18:50:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:02.077 18:50:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59002 ]] 00:06:02.077 18:50:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59002 00:06:02.077 18:50:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:02.077 18:50:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.077 18:50:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59002 00:06:02.077 18:50:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:02.335 18:50:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:02.335 18:50:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.335 18:50:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59002 00:06:02.335 18:50:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:02.903 18:50:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:02.903 18:50:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.903 18:50:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59002 00:06:02.903 18:50:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:03.469 18:50:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:03.469 18:50:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.469 18:50:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59002 00:06:03.469 18:50:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.037 18:50:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.037 18:50:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.037 18:50:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59002 00:06:04.037 18:50:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.296 18:50:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.296 18:50:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.296 18:50:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59002 00:06:04.296 18:50:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:04.296 SPDK target shutdown done 00:06:04.296 Success 00:06:04.296 18:50:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:04.296 18:50:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:04.296 18:50:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:04.296 18:50:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:04.296 00:06:04.296 real 0m4.026s 00:06:04.296 user 0m3.997s 00:06:04.296 sys 0m0.462s 00:06:04.296 ************************************ 00:06:04.296 END TEST json_config_extra_key 00:06:04.296 ************************************ 00:06:04.296 18:50:35 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.296 18:50:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:04.296 18:50:35 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.296 18:50:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.296 18:50:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.296 18:50:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.296 ************************************ 00:06:04.296 START TEST alias_rpc 00:06:04.296 ************************************ 00:06:04.296 18:50:35 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.555 * Looking for test storage... 00:06:04.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
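The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is printed by waitforlisten while it polls the new target's RPC socket (the trace shows its max_retries=100 default). A sketch of an equivalent readiness probe, assuming only that rpc_get_methods starts succeeding once the target listens; the 0.1 s interval is illustrative:

  # poll the UNIX-domain RPC socket until the target answers
  for ((i = 0; i < 100; i++)); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done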
00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.555 18:50:35 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.555 --rc genhtml_branch_coverage=1 00:06:04.555 --rc genhtml_function_coverage=1 00:06:04.555 --rc genhtml_legend=1 00:06:04.555 --rc geninfo_all_blocks=1 00:06:04.555 --rc geninfo_unexecuted_blocks=1 00:06:04.555 00:06:04.555 ' 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.555 --rc genhtml_branch_coverage=1 00:06:04.555 --rc genhtml_function_coverage=1 00:06:04.555 --rc genhtml_legend=1 00:06:04.555 --rc geninfo_all_blocks=1 00:06:04.555 --rc geninfo_unexecuted_blocks=1 00:06:04.555 00:06:04.555 ' 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.555 --rc genhtml_branch_coverage=1 00:06:04.555 --rc genhtml_function_coverage=1 00:06:04.555 --rc genhtml_legend=1 00:06:04.555 --rc geninfo_all_blocks=1 00:06:04.555 --rc geninfo_unexecuted_blocks=1 00:06:04.555 00:06:04.555 ' 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.555 --rc genhtml_branch_coverage=1 00:06:04.555 --rc genhtml_function_coverage=1 00:06:04.555 --rc genhtml_legend=1 00:06:04.555 --rc geninfo_all_blocks=1 00:06:04.555 --rc geninfo_unexecuted_blocks=1 00:06:04.555 00:06:04.555 ' 00:06:04.555 18:50:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.555 18:50:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59107 00:06:04.555 18:50:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59107 00:06:04.555 18:50:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59107 ']' 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.555 18:50:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.814 [2024-11-26 18:50:35.838413] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
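The lt 1.15 2 sequence traced repeatedly in this run is scripts/common.sh asking whether the installed lcov predates version 2: both version strings are split on ".-:", padded to equal length, and compared component by component. A condensed sketch of the same comparison (the in-tree helper additionally normalizes each field through its decimal function, as the trace shows):

  # return 0 if $1 sorts before $2 component-wise, e.g. 1.15 < 2
  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < n; v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # versions are equal
  }
  version_lt 1.15 2 && echo 'lcov predates 2.x'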
00:06:04.814 [2024-11-26 18:50:35.839310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59107 ] 00:06:05.072 [2024-11-26 18:50:36.093502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.072 [2024-11-26 18:50:36.209140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.097 18:50:36 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.097 18:50:36 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.097 18:50:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:06.097 18:50:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59107 00:06:06.097 18:50:37 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59107 ']' 00:06:06.097 18:50:37 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59107 00:06:06.097 18:50:37 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:06.356 18:50:37 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.356 18:50:37 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59107 00:06:06.356 killing process with pid 59107 00:06:06.356 18:50:37 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.356 18:50:37 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.356 18:50:37 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59107' 00:06:06.356 18:50:37 alias_rpc -- common/autotest_common.sh@973 -- # kill 59107 00:06:06.356 18:50:37 alias_rpc -- common/autotest_common.sh@978 -- # wait 59107 00:06:08.257 ************************************ 00:06:08.257 END TEST alias_rpc 00:06:08.257 ************************************ 00:06:08.257 00:06:08.257 real 0m3.935s 00:06:08.257 user 0m4.210s 00:06:08.257 sys 0m0.511s 00:06:08.257 18:50:39 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.257 18:50:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 18:50:39 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:08.515 18:50:39 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:08.515 18:50:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.515 18:50:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.515 18:50:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 ************************************ 00:06:08.515 START TEST spdkcli_tcp 00:06:08.515 ************************************ 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:08.515 * Looking for test storage... 
00:06:08.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.515 18:50:39 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.515 --rc genhtml_branch_coverage=1 00:06:08.515 --rc genhtml_function_coverage=1 00:06:08.515 --rc genhtml_legend=1 00:06:08.515 --rc geninfo_all_blocks=1 00:06:08.515 --rc geninfo_unexecuted_blocks=1 00:06:08.515 00:06:08.515 ' 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.515 --rc genhtml_branch_coverage=1 00:06:08.515 --rc genhtml_function_coverage=1 00:06:08.515 --rc genhtml_legend=1 00:06:08.515 --rc geninfo_all_blocks=1 00:06:08.515 --rc geninfo_unexecuted_blocks=1 00:06:08.515 
00:06:08.515 ' 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.515 --rc genhtml_branch_coverage=1 00:06:08.515 --rc genhtml_function_coverage=1 00:06:08.515 --rc genhtml_legend=1 00:06:08.515 --rc geninfo_all_blocks=1 00:06:08.515 --rc geninfo_unexecuted_blocks=1 00:06:08.515 00:06:08.515 ' 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.515 --rc genhtml_branch_coverage=1 00:06:08.515 --rc genhtml_function_coverage=1 00:06:08.515 --rc genhtml_legend=1 00:06:08.515 --rc geninfo_all_blocks=1 00:06:08.515 --rc geninfo_unexecuted_blocks=1 00:06:08.515 00:06:08.515 ' 00:06:08.515 18:50:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:08.515 18:50:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:08.515 18:50:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:08.515 18:50:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:08.515 18:50:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:08.515 18:50:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:08.515 18:50:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.515 18:50:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59214 00:06:08.515 18:50:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59214 00:06:08.515 18:50:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59214 ']' 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.515 18:50:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 [2024-11-26 18:50:39.802483] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
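spdkcli_tcp's point is to drive the same JSON-RPC API over TCP instead of the default UNIX socket. As the next trace shows, the test does this with a plain socat bridge: listen on 127.0.0.1:9998, forward each connection to /var/tmp/spdk.sock, and aim rpc.py at the TCP endpoint:

  # bridge TCP port 9998 to the target's UNIX-domain RPC socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # same RPC over TCP; -r and -t are the retry count and timeout used below
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods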
00:06:08.774 [2024-11-26 18:50:39.802663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59214 ] 00:06:08.774 [2024-11-26 18:50:39.984225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.032 [2024-11-26 18:50:40.092878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.032 [2024-11-26 18:50:40.092887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.969 18:50:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.969 18:50:40 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:09.969 18:50:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59231 00:06:09.969 18:50:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:09.969 18:50:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:09.969 [ 00:06:09.969 "bdev_malloc_delete", 00:06:09.969 "bdev_malloc_create", 00:06:09.969 "bdev_null_resize", 00:06:09.969 "bdev_null_delete", 00:06:09.969 "bdev_null_create", 00:06:09.969 "bdev_nvme_cuse_unregister", 00:06:09.969 "bdev_nvme_cuse_register", 00:06:09.969 "bdev_opal_new_user", 00:06:09.969 "bdev_opal_set_lock_state", 00:06:09.969 "bdev_opal_delete", 00:06:09.969 "bdev_opal_get_info", 00:06:09.969 "bdev_opal_create", 00:06:09.969 "bdev_nvme_opal_revert", 00:06:09.969 "bdev_nvme_opal_init", 00:06:09.969 "bdev_nvme_send_cmd", 00:06:09.969 "bdev_nvme_set_keys", 00:06:09.969 "bdev_nvme_get_path_iostat", 00:06:09.969 "bdev_nvme_get_mdns_discovery_info", 00:06:09.969 "bdev_nvme_stop_mdns_discovery", 00:06:09.969 "bdev_nvme_start_mdns_discovery", 00:06:09.969 "bdev_nvme_set_multipath_policy", 00:06:09.969 "bdev_nvme_set_preferred_path", 00:06:09.969 "bdev_nvme_get_io_paths", 00:06:09.969 "bdev_nvme_remove_error_injection", 00:06:09.969 "bdev_nvme_add_error_injection", 00:06:09.969 "bdev_nvme_get_discovery_info", 00:06:09.969 "bdev_nvme_stop_discovery", 00:06:09.969 "bdev_nvme_start_discovery", 00:06:09.969 "bdev_nvme_get_controller_health_info", 00:06:09.969 "bdev_nvme_disable_controller", 00:06:09.969 "bdev_nvme_enable_controller", 00:06:09.969 "bdev_nvme_reset_controller", 00:06:09.969 "bdev_nvme_get_transport_statistics", 00:06:09.969 "bdev_nvme_apply_firmware", 00:06:09.969 "bdev_nvme_detach_controller", 00:06:09.969 "bdev_nvme_get_controllers", 00:06:09.969 "bdev_nvme_attach_controller", 00:06:09.969 "bdev_nvme_set_hotplug", 00:06:09.969 "bdev_nvme_set_options", 00:06:09.969 "bdev_passthru_delete", 00:06:09.969 "bdev_passthru_create", 00:06:09.969 "bdev_lvol_set_parent_bdev", 00:06:09.969 "bdev_lvol_set_parent", 00:06:09.969 "bdev_lvol_check_shallow_copy", 00:06:09.969 "bdev_lvol_start_shallow_copy", 00:06:09.969 "bdev_lvol_grow_lvstore", 00:06:09.969 "bdev_lvol_get_lvols", 00:06:09.969 "bdev_lvol_get_lvstores", 00:06:09.969 "bdev_lvol_delete", 00:06:09.969 "bdev_lvol_set_read_only", 00:06:09.969 "bdev_lvol_resize", 00:06:09.969 "bdev_lvol_decouple_parent", 00:06:09.969 "bdev_lvol_inflate", 00:06:09.969 "bdev_lvol_rename", 00:06:09.969 "bdev_lvol_clone_bdev", 00:06:09.969 "bdev_lvol_clone", 00:06:09.969 "bdev_lvol_snapshot", 00:06:09.969 "bdev_lvol_create", 00:06:09.969 "bdev_lvol_delete_lvstore", 00:06:09.969 "bdev_lvol_rename_lvstore", 00:06:09.969 
"bdev_lvol_create_lvstore", 00:06:09.969 "bdev_raid_set_options", 00:06:09.969 "bdev_raid_remove_base_bdev", 00:06:09.969 "bdev_raid_add_base_bdev", 00:06:09.969 "bdev_raid_delete", 00:06:09.969 "bdev_raid_create", 00:06:09.969 "bdev_raid_get_bdevs", 00:06:09.969 "bdev_error_inject_error", 00:06:09.969 "bdev_error_delete", 00:06:09.969 "bdev_error_create", 00:06:09.969 "bdev_split_delete", 00:06:09.969 "bdev_split_create", 00:06:09.969 "bdev_delay_delete", 00:06:09.969 "bdev_delay_create", 00:06:09.969 "bdev_delay_update_latency", 00:06:09.969 "bdev_zone_block_delete", 00:06:09.969 "bdev_zone_block_create", 00:06:09.969 "blobfs_create", 00:06:09.969 "blobfs_detect", 00:06:09.969 "blobfs_set_cache_size", 00:06:09.969 "bdev_xnvme_delete", 00:06:09.969 "bdev_xnvme_create", 00:06:09.969 "bdev_aio_delete", 00:06:09.969 "bdev_aio_rescan", 00:06:09.969 "bdev_aio_create", 00:06:09.969 "bdev_ftl_set_property", 00:06:09.969 "bdev_ftl_get_properties", 00:06:09.969 "bdev_ftl_get_stats", 00:06:09.969 "bdev_ftl_unmap", 00:06:09.969 "bdev_ftl_unload", 00:06:09.969 "bdev_ftl_delete", 00:06:09.969 "bdev_ftl_load", 00:06:09.969 "bdev_ftl_create", 00:06:09.969 "bdev_virtio_attach_controller", 00:06:09.969 "bdev_virtio_scsi_get_devices", 00:06:09.969 "bdev_virtio_detach_controller", 00:06:09.969 "bdev_virtio_blk_set_hotplug", 00:06:09.969 "bdev_iscsi_delete", 00:06:09.969 "bdev_iscsi_create", 00:06:09.969 "bdev_iscsi_set_options", 00:06:09.969 "accel_error_inject_error", 00:06:09.969 "ioat_scan_accel_module", 00:06:09.969 "dsa_scan_accel_module", 00:06:09.969 "iaa_scan_accel_module", 00:06:09.969 "keyring_file_remove_key", 00:06:09.969 "keyring_file_add_key", 00:06:09.969 "keyring_linux_set_options", 00:06:09.969 "fsdev_aio_delete", 00:06:09.969 "fsdev_aio_create", 00:06:09.969 "iscsi_get_histogram", 00:06:09.969 "iscsi_enable_histogram", 00:06:09.969 "iscsi_set_options", 00:06:09.969 "iscsi_get_auth_groups", 00:06:09.969 "iscsi_auth_group_remove_secret", 00:06:09.969 "iscsi_auth_group_add_secret", 00:06:09.969 "iscsi_delete_auth_group", 00:06:09.969 "iscsi_create_auth_group", 00:06:09.969 "iscsi_set_discovery_auth", 00:06:09.969 "iscsi_get_options", 00:06:09.969 "iscsi_target_node_request_logout", 00:06:09.969 "iscsi_target_node_set_redirect", 00:06:09.969 "iscsi_target_node_set_auth", 00:06:09.969 "iscsi_target_node_add_lun", 00:06:09.969 "iscsi_get_stats", 00:06:09.969 "iscsi_get_connections", 00:06:09.969 "iscsi_portal_group_set_auth", 00:06:09.969 "iscsi_start_portal_group", 00:06:09.969 "iscsi_delete_portal_group", 00:06:09.969 "iscsi_create_portal_group", 00:06:09.969 "iscsi_get_portal_groups", 00:06:09.969 "iscsi_delete_target_node", 00:06:09.970 "iscsi_target_node_remove_pg_ig_maps", 00:06:09.970 "iscsi_target_node_add_pg_ig_maps", 00:06:09.970 "iscsi_create_target_node", 00:06:09.970 "iscsi_get_target_nodes", 00:06:09.970 "iscsi_delete_initiator_group", 00:06:09.970 "iscsi_initiator_group_remove_initiators", 00:06:09.970 "iscsi_initiator_group_add_initiators", 00:06:09.970 "iscsi_create_initiator_group", 00:06:09.970 "iscsi_get_initiator_groups", 00:06:09.970 "nvmf_set_crdt", 00:06:09.970 "nvmf_set_config", 00:06:09.970 "nvmf_set_max_subsystems", 00:06:09.970 "nvmf_stop_mdns_prr", 00:06:09.970 "nvmf_publish_mdns_prr", 00:06:09.970 "nvmf_subsystem_get_listeners", 00:06:09.970 "nvmf_subsystem_get_qpairs", 00:06:09.970 "nvmf_subsystem_get_controllers", 00:06:09.970 "nvmf_get_stats", 00:06:09.970 "nvmf_get_transports", 00:06:09.970 "nvmf_create_transport", 00:06:09.970 "nvmf_get_targets", 00:06:09.970 
"nvmf_delete_target", 00:06:09.970 "nvmf_create_target", 00:06:09.970 "nvmf_subsystem_allow_any_host", 00:06:09.970 "nvmf_subsystem_set_keys", 00:06:09.970 "nvmf_subsystem_remove_host", 00:06:09.970 "nvmf_subsystem_add_host", 00:06:09.970 "nvmf_ns_remove_host", 00:06:09.970 "nvmf_ns_add_host", 00:06:09.970 "nvmf_subsystem_remove_ns", 00:06:09.970 "nvmf_subsystem_set_ns_ana_group", 00:06:09.970 "nvmf_subsystem_add_ns", 00:06:09.970 "nvmf_subsystem_listener_set_ana_state", 00:06:09.970 "nvmf_discovery_get_referrals", 00:06:09.970 "nvmf_discovery_remove_referral", 00:06:09.970 "nvmf_discovery_add_referral", 00:06:09.970 "nvmf_subsystem_remove_listener", 00:06:09.970 "nvmf_subsystem_add_listener", 00:06:09.970 "nvmf_delete_subsystem", 00:06:09.970 "nvmf_create_subsystem", 00:06:09.970 "nvmf_get_subsystems", 00:06:09.970 "env_dpdk_get_mem_stats", 00:06:09.970 "nbd_get_disks", 00:06:09.970 "nbd_stop_disk", 00:06:09.970 "nbd_start_disk", 00:06:09.970 "ublk_recover_disk", 00:06:09.970 "ublk_get_disks", 00:06:09.970 "ublk_stop_disk", 00:06:09.970 "ublk_start_disk", 00:06:09.970 "ublk_destroy_target", 00:06:09.970 "ublk_create_target", 00:06:09.970 "virtio_blk_create_transport", 00:06:09.970 "virtio_blk_get_transports", 00:06:09.970 "vhost_controller_set_coalescing", 00:06:09.970 "vhost_get_controllers", 00:06:09.970 "vhost_delete_controller", 00:06:09.970 "vhost_create_blk_controller", 00:06:09.970 "vhost_scsi_controller_remove_target", 00:06:09.970 "vhost_scsi_controller_add_target", 00:06:09.970 "vhost_start_scsi_controller", 00:06:09.970 "vhost_create_scsi_controller", 00:06:09.970 "thread_set_cpumask", 00:06:09.970 "scheduler_set_options", 00:06:09.970 "framework_get_governor", 00:06:09.970 "framework_get_scheduler", 00:06:09.970 "framework_set_scheduler", 00:06:09.970 "framework_get_reactors", 00:06:09.970 "thread_get_io_channels", 00:06:09.970 "thread_get_pollers", 00:06:09.970 "thread_get_stats", 00:06:09.970 "framework_monitor_context_switch", 00:06:09.970 "spdk_kill_instance", 00:06:09.970 "log_enable_timestamps", 00:06:09.970 "log_get_flags", 00:06:09.970 "log_clear_flag", 00:06:09.970 "log_set_flag", 00:06:09.970 "log_get_level", 00:06:09.970 "log_set_level", 00:06:09.970 "log_get_print_level", 00:06:09.970 "log_set_print_level", 00:06:09.970 "framework_enable_cpumask_locks", 00:06:09.970 "framework_disable_cpumask_locks", 00:06:09.970 "framework_wait_init", 00:06:09.970 "framework_start_init", 00:06:09.970 "scsi_get_devices", 00:06:09.970 "bdev_get_histogram", 00:06:09.970 "bdev_enable_histogram", 00:06:09.970 "bdev_set_qos_limit", 00:06:09.970 "bdev_set_qd_sampling_period", 00:06:09.970 "bdev_get_bdevs", 00:06:09.970 "bdev_reset_iostat", 00:06:09.970 "bdev_get_iostat", 00:06:09.970 "bdev_examine", 00:06:09.970 "bdev_wait_for_examine", 00:06:09.970 "bdev_set_options", 00:06:09.970 "accel_get_stats", 00:06:09.970 "accel_set_options", 00:06:09.970 "accel_set_driver", 00:06:09.970 "accel_crypto_key_destroy", 00:06:09.970 "accel_crypto_keys_get", 00:06:09.970 "accel_crypto_key_create", 00:06:09.970 "accel_assign_opc", 00:06:09.970 "accel_get_module_info", 00:06:09.970 "accel_get_opc_assignments", 00:06:09.970 "vmd_rescan", 00:06:09.970 "vmd_remove_device", 00:06:09.970 "vmd_enable", 00:06:09.970 "sock_get_default_impl", 00:06:09.970 "sock_set_default_impl", 00:06:09.970 "sock_impl_set_options", 00:06:09.970 "sock_impl_get_options", 00:06:09.970 "iobuf_get_stats", 00:06:09.970 "iobuf_set_options", 00:06:09.970 "keyring_get_keys", 00:06:09.970 "framework_get_pci_devices", 00:06:09.970 
"framework_get_config", 00:06:09.970 "framework_get_subsystems", 00:06:09.970 "fsdev_set_opts", 00:06:09.970 "fsdev_get_opts", 00:06:09.970 "trace_get_info", 00:06:09.970 "trace_get_tpoint_group_mask", 00:06:09.970 "trace_disable_tpoint_group", 00:06:09.970 "trace_enable_tpoint_group", 00:06:09.970 "trace_clear_tpoint_mask", 00:06:09.970 "trace_set_tpoint_mask", 00:06:09.970 "notify_get_notifications", 00:06:09.970 "notify_get_types", 00:06:09.970 "spdk_get_version", 00:06:09.970 "rpc_get_methods" 00:06:09.970 ] 00:06:09.970 18:50:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:09.970 18:50:41 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.970 18:50:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.229 18:50:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:10.229 18:50:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59214 00:06:10.229 18:50:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59214 ']' 00:06:10.229 18:50:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59214 00:06:10.229 18:50:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:10.229 18:50:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.229 18:50:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59214 00:06:10.229 killing process with pid 59214 00:06:10.229 18:50:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.229 18:50:41 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.229 18:50:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59214' 00:06:10.229 18:50:41 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59214 00:06:10.229 18:50:41 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59214 00:06:12.756 ************************************ 00:06:12.756 END TEST spdkcli_tcp 00:06:12.756 ************************************ 00:06:12.756 00:06:12.756 real 0m3.865s 00:06:12.756 user 0m7.047s 00:06:12.756 sys 0m0.539s 00:06:12.756 18:50:43 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.756 18:50:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.756 18:50:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.756 18:50:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.756 18:50:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.756 18:50:43 -- common/autotest_common.sh@10 -- # set +x 00:06:12.756 ************************************ 00:06:12.756 START TEST dpdk_mem_utility 00:06:12.756 ************************************ 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.756 * Looking for test storage... 
00:06:12.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.756 18:50:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc genhtml_branch_coverage=1 00:06:12.756 --rc genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc 
genhtml_branch_coverage=1 00:06:12.756 --rc genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc genhtml_branch_coverage=1 00:06:12.756 --rc genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.756 --rc genhtml_branch_coverage=1 00:06:12.756 --rc genhtml_function_coverage=1 00:06:12.756 --rc genhtml_legend=1 00:06:12.756 --rc geninfo_all_blocks=1 00:06:12.756 --rc geninfo_unexecuted_blocks=1 00:06:12.756 00:06:12.756 ' 00:06:12.756 18:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:12.756 18:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59331 00:06:12.756 18:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.756 18:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59331 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59331 ']' 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.756 18:50:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.756 [2024-11-26 18:50:43.695495] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
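The dpdk_mem_utility test that follows boils down to two commands: the env_dpdk_get_mem_stats RPC, which makes the running target write its DPDK heap and mempool statistics to a file, and scripts/dpdk_mem_info.py, which renders that dump (the trace below also re-runs it with -m 0; the flag's semantics are not shown in this log). A sketch of the same sequence against a live target:

  # ask spdk_tgt to dump its DPDK memory statistics
  scripts/rpc.py env_dpdk_get_mem_stats    # replies {"filename": "/tmp/spdk_mem_dump.txt"}
  # render the dump, then once more with -m 0 exactly as the test does
  scripts/dpdk_mem_info.py
  scripts/dpdk_mem_info.py -m 0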
00:06:12.756 [2024-11-26 18:50:43.695863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59331 ] 00:06:12.756 [2024-11-26 18:50:43.883340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.015 [2024-11-26 18:50:43.988088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.951 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.951 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:13.951 18:50:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:13.951 18:50:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:13.951 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.951 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.951 { 00:06:13.951 "filename": "/tmp/spdk_mem_dump.txt" 00:06:13.951 } 00:06:13.951 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.951 18:50:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:13.951 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:13.951 1 heaps totaling size 824.000000 MiB 00:06:13.951 size: 824.000000 MiB heap id: 0 00:06:13.951 end heaps---------- 00:06:13.951 9 mempools totaling size 603.782043 MiB 00:06:13.951 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:13.951 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:13.951 size: 100.555481 MiB name: bdev_io_59331 00:06:13.951 size: 50.003479 MiB name: msgpool_59331 00:06:13.951 size: 36.509338 MiB name: fsdev_io_59331 00:06:13.951 size: 21.763794 MiB name: PDU_Pool 00:06:13.951 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:13.951 size: 4.133484 MiB name: evtpool_59331 00:06:13.951 size: 0.026123 MiB name: Session_Pool 00:06:13.951 end mempools------- 00:06:13.951 6 memzones totaling size 4.142822 MiB 00:06:13.951 size: 1.000366 MiB name: RG_ring_0_59331 00:06:13.951 size: 1.000366 MiB name: RG_ring_1_59331 00:06:13.951 size: 1.000366 MiB name: RG_ring_4_59331 00:06:13.951 size: 1.000366 MiB name: RG_ring_5_59331 00:06:13.951 size: 0.125366 MiB name: RG_ring_2_59331 00:06:13.951 size: 0.015991 MiB name: RG_ring_3_59331 00:06:13.951 end memzones------- 00:06:13.951 18:50:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:13.951 heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18 00:06:13.951 list of free elements. 
size: 16.781860 MiB
00:06:13.951 element at address: 0x200006400000 with size: 1.995972 MiB
00:06:13.951 element at address: 0x20000a600000 with size: 1.995972 MiB
00:06:13.951 element at address: 0x200003e00000 with size: 1.991028 MiB
00:06:13.951 element at address: 0x200019500040 with size: 0.999939 MiB
00:06:13.951 element at address: 0x200019900040 with size: 0.999939 MiB
00:06:13.951 element at address: 0x200019a00000 with size: 0.999084 MiB
00:06:13.951 element at address: 0x200032600000 with size: 0.994324 MiB
00:06:13.951 element at address: 0x200000400000 with size: 0.992004 MiB
00:06:13.951 element at address: 0x200019200000 with size: 0.959656 MiB
00:06:13.951 element at address: 0x200019d00040 with size: 0.936401 MiB
00:06:13.951 element at address: 0x200000200000 with size: 0.716980 MiB
00:06:13.951 element at address: 0x20001b400000 with size: 0.563416 MiB
00:06:13.951 element at address: 0x200000c00000 with size: 0.489197 MiB
00:06:13.951 element at address: 0x200019600000 with size: 0.487976 MiB
00:06:13.951 element at address: 0x200019e00000 with size: 0.485413 MiB
00:06:13.951 element at address: 0x200012c00000 with size: 0.433228 MiB
00:06:13.951 element at address: 0x200028800000 with size: 0.390442 MiB
00:06:13.951 element at address: 0x200000800000 with size: 0.350891 MiB
00:06:13.951 list of standard malloc elements. size: 199.287231 MiB
00:06:13.951 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:06:13.951 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:06:13.951 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:06:13.951 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:06:13.951 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:06:13.951 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:06:13.951 element at address: 0x200019deff40 with size: 0.062683 MiB
00:06:13.951 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:06:13.951 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:06:13.951 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:06:13.951 element at address: 0x200012bff040 with size: 0.000305 MiB
00:06:13.952 elements at addresses 0x2000002d7b00 through 0x20002886fe80 (~285 entries): 0.000244 MiB each
00:06:13.952 list of memzone associated elements. size: 607.930908 MiB
00:06:13.953 element at address: 0x20001b4954c0 with size: 211.416809 MiB, associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:13.953 element at address: 0x20002886ff80 with size: 157.562622 MiB, associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:13.953 element at address: 0x200012df1e40 with size: 100.055115 MiB, associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59331_0
00:06:13.953 element at address: 0x200000dff340 with size: 48.003113 MiB, associated memzone info: size: 48.002930 MiB name: MP_msgpool_59331_0
00:06:13.953 element at address: 0x200003ffdb40 with size: 36.008972 MiB, associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59331_0
00:06:13.953 element at address: 0x200019fbe900 with size: 20.255615 MiB, associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:13.953 element at address: 0x2000327feb00 with size: 18.005127 MiB, associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:13.954 element at address: 0x2000004ffec0 with size: 3.000305 MiB, associated memzone info: size: 3.000122 MiB name: MP_evtpool_59331_0
00:06:13.954 element at address: 0x2000009ffdc0 with size: 2.000549 MiB, associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59331
00:06:13.954 element at address: 0x2000002d7c00 with size: 1.008179 MiB, associated memzone info: size: 1.007996 MiB name: MP_evtpool_59331
00:06:13.954 element at address: 0x2000196fde00 with size: 1.008179 MiB, associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:13.954 element at address: 0x200019ebc780 with size: 1.008179 MiB, associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:13.954 element at address: 0x2000192fde00 with size: 1.008179 MiB, associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:13.954 element at address: 0x200012cefcc0 with size: 1.008179 MiB, associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:13.954 element at address: 0x200000cff100 with size: 1.000549 MiB, associated memzone info: size: 1.000366 MiB name: RG_ring_0_59331
00:06:13.954 element at address: 0x2000008ffb80 with size: 1.000549 MiB, associated memzone info: size: 1.000366 MiB name: RG_ring_1_59331
00:06:13.954 element at address: 0x200019affd40 with size: 1.000549 MiB, associated memzone info: size: 1.000366 MiB name: RG_ring_4_59331
00:06:13.954 element at address: 0x2000326fe8c0 with size: 1.000549 MiB, associated memzone info: size: 1.000366 MiB name: RG_ring_5_59331
00:06:13.954 element at address: 0x20000087f5c0 with size: 0.500549 MiB, associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59331
00:06:13.954 element at address: 0x200000c7ecc0 with size: 0.500549 MiB, associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59331
00:06:13.954 element at address: 0x20001967dac0 with size: 0.500549 MiB, associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:13.954 element at address: 0x200012c6f980 with size: 0.500549 MiB, associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:13.954 element at address: 0x200019e7c440 with size: 0.250549 MiB, associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:13.954 element at address: 0x2000002b78c0 with size: 0.125549 MiB, associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59331
00:06:13.954 element at address: 0x20000085df80 with size: 0.125549 MiB, associated memzone info: size: 0.125366 MiB name: RG_ring_2_59331
00:06:13.954 element at address: 0x2000192f5ac0 with size: 0.031799 MiB, associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:13.954 element at address: 0x200028864140 with size: 0.023804 MiB, associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:13.954 element at address: 0x200000859d40 with size: 0.016174 MiB, associated memzone info: size: 0.015991 MiB name: RG_ring_3_59331
00:06:13.954 element at address: 0x20002886a2c0 with size: 0.002502 MiB, associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:13.954 element at address: 0x2000004ffa40 with size: 0.000366 MiB, associated memzone info: size: 0.000183 MiB name: MP_msgpool_59331
00:06:13.954 element at address: 0x2000008ff900 with size: 0.000366 MiB, associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59331
00:06:13.954 element at address: 0x200012bffd80 with size: 0.000366 MiB, associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59331
00:06:13.954 element at address: 0x20002886ae00 with size: 0.000366 MiB, associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:13.954 18:50:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:13.954 18:50:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59331
00:06:13.954 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59331 ']'
00:06:13.954 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59331
00:06:13.954 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:06:13.954 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:13.954 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59331
00:06:13.954 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:13.954 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:13.954 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59331'
00:06:13.954 killing process with pid 59331
00:06:13.954 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59331
00:06:13.954 18:50:44 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59331
00:06:16.483 
00:06:16.483 real 0m3.731s
00:06:16.483 user 0m3.811s
00:06:16.483 sys 0m0.490s
00:06:16.483 ************************************
00:06:16.483 END TEST dpdk_mem_utility
00:06:16.483 ************************************
00:06:16.483 18:50:47 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:16.483 18:50:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:16.483 18:50:47 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:16.483 18:50:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.483 18:50:47 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.483 18:50:47 -- common/autotest_common.sh@10 -- # set +x
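The kill/wait sequence traced in the dpdk_mem_utility teardown above comes from the harness's killprocess helper. A rough reconstruction of the logic visible in the trace follows; the sudo branch body is an assumption, since the trace only ever sees reactor_* process names, and the real common/autotest_common.sh carries extra option handling:

    killprocess() {                               # sketch inferred from the trace, not the verbatim helper
      local pid=$1
      [ -z "$pid" ] && return 1                   # @954: bail out if no pid was supplied
      kill -0 "$pid" 2>/dev/null || return 0      # @958: process already gone, nothing to do
      local process_name
      if [ "$(uname)" = Linux ]; then             # @959: Linux-only comm lookup
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: e.g. reactor_0
      fi
      if [ "$process_name" = sudo ]; then         # @964: assumed path for sudo-wrapped apps
        sudo kill "$pid"                          # assumption; this branch is never taken in the trace
      else
        echo "killing process with pid $pid"      # @972
        kill "$pid"                               # @973
      fi
      wait "$pid" 2>/dev/null                     # @978: reap the child and swallow races
      return 0
    }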
00:06:16.483 ************************************
00:06:16.483 START TEST event
00:06:16.483 ************************************
00:06:16.483 18:50:47 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:16.483 * Looking for test storage...
00:06:16.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:16.484 18:50:47 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:16.484 18:50:47 event -- common/autotest_common.sh@1693 -- # lcov --version
00:06:16.484 18:50:47 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:16.484 18:50:47 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:16.484 18:50:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:16.484 18:50:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:16.484 18:50:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:16.484 18:50:47 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:16.484 18:50:47 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:16.484 18:50:47 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:16.484 18:50:47 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:16.484 18:50:47 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:16.484 18:50:47 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:16.484 18:50:47 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:16.484 18:50:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:16.484 18:50:47 event -- scripts/common.sh@344 -- # case "$op" in
00:06:16.484 18:50:47 event -- scripts/common.sh@345 -- # : 1
00:06:16.484 18:50:47 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:16.484 18:50:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:16.484 18:50:47 event -- scripts/common.sh@365 -- # decimal 1
00:06:16.484 18:50:47 event -- scripts/common.sh@353 -- # local d=1
00:06:16.484 18:50:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:16.484 18:50:47 event -- scripts/common.sh@355 -- # echo 1
00:06:16.484 18:50:47 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:16.484 18:50:47 event -- scripts/common.sh@366 -- # decimal 2
00:06:16.484 18:50:47 event -- scripts/common.sh@353 -- # local d=2
00:06:16.484 18:50:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:16.484 18:50:47 event -- scripts/common.sh@355 -- # echo 2
00:06:16.484 18:50:47 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:16.484 18:50:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:16.484 18:50:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:16.484 18:50:47 event -- scripts/common.sh@368 -- # return 0
00:06:16.484 18:50:47 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:16.484 18:50:47 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:16.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.484 --rc genhtml_branch_coverage=1
00:06:16.484 --rc genhtml_function_coverage=1
00:06:16.484 --rc genhtml_legend=1
00:06:16.484 --rc geninfo_all_blocks=1
00:06:16.484 --rc geninfo_unexecuted_blocks=1
00:06:16.484 
00:06:16.484 '
00:06:16.484 18:50:47 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:16.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.484 --rc genhtml_branch_coverage=1
00:06:16.484 --rc genhtml_function_coverage=1
00:06:16.484 --rc genhtml_legend=1
00:06:16.484 --rc geninfo_all_blocks=1
00:06:16.484 --rc geninfo_unexecuted_blocks=1
00:06:16.484 
00:06:16.484 '
00:06:16.484 18:50:47 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:16.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.484 --rc genhtml_branch_coverage=1
00:06:16.484 --rc genhtml_function_coverage=1
00:06:16.484 --rc genhtml_legend=1
00:06:16.484 --rc geninfo_all_blocks=1
00:06:16.484 --rc geninfo_unexecuted_blocks=1
00:06:16.484 
00:06:16.484 '
00:06:16.484 18:50:47 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:16.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.484 --rc genhtml_branch_coverage=1
00:06:16.484 --rc genhtml_function_coverage=1
00:06:16.484 --rc genhtml_legend=1
00:06:16.484 --rc geninfo_all_blocks=1
00:06:16.484 --rc geninfo_unexecuted_blocks=1
00:06:16.484 
00:06:16.484 '
00:06:16.484 18:50:47 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:16.484 18:50:47 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:16.484 18:50:47 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:16.484 18:50:47 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:06:16.484 18:50:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.484 18:50:47 event -- common/autotest_common.sh@10 -- # set +x
00:06:16.484 ************************************
00:06:16.484 START TEST event_perf
00:06:16.484 ************************************
00:06:16.484 18:50:47 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:16.484 Running I/O for 1 seconds...[2024-11-26 18:50:47.385626] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:06:16.484 [2024-11-26 18:50:47.385916] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59435 ]
00:06:16.484 [2024-11-26 18:50:47.562654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:16.484 [2024-11-26 18:50:47.674651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:16.484 [2024-11-26 18:50:47.674765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:16.484 [2024-11-26 18:50:47.674831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.484 [2024-11-26 18:50:47.674831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:17.855 Running I/O for 1 seconds...
00:06:17.855 lcore 0: 164881
00:06:17.855 lcore 1: 164881
00:06:17.855 lcore 2: 164880
00:06:17.855 lcore 3: 164879
00:06:17.855 done.
00:06:17.855 
00:06:17.855 real 0m1.571s
00:06:17.855 user 0m4.345s
00:06:17.855 sys 0m0.096s
00:06:17.855 18:50:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:17.855 18:50:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:17.857 ************************************
00:06:17.857 END TEST event_perf
00:06:17.857 ************************************
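The lt 1.15 2 trace near the top of this test group walks the dotted-version comparison in scripts/common.sh, used here to pick lcov options for lcov releases older than 2. A condensed sketch of that algorithm follows; field padding and the equality case are assumptions, since the trace only exercises the '<' path:

    cmp_versions() {                           # simplified sketch of the traced compare
      local ver1 ver2 IFS=.-:                  # split fields on dots, dashes, colons (@336-337)
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local op=$2 v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}  # missing fields compare as 0 (assumption)
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }   # 1 < 2, so lt 1.15 2 returns 0 (@368)
      done
      [[ $op == '=' ]]                         # all fields equal (assumption)
    }
    lt() { cmp_versions "$1" '<' "$2"; }       # lt 1.15 2 succeeds, selecting the old-lcov flags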
00:06:17.858 18:50:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:17.858 18:50:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:17.858 18:50:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:17.858 18:50:48 event -- common/autotest_common.sh@10 -- # set +x
00:06:17.858 ************************************
00:06:17.858 START TEST event_reactor
00:06:17.858 ************************************
00:06:17.858 18:50:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:18.113 [2024-11-26 18:50:49.006813] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:06:18.113 [2024-11-26 18:50:49.007205] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59480 ]
00:06:18.113 [2024-11-26 18:50:49.207425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.113 [2024-11-26 18:50:49.315379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.487 test_start
00:06:19.487 oneshot
00:06:19.487 tick 100
00:06:19.487 tick 100
00:06:19.487 tick 250
00:06:19.487 tick 100
00:06:19.487 tick 100
00:06:19.487 tick 250
00:06:19.487 tick 100
00:06:19.487 tick 500
00:06:19.487 tick 100
00:06:19.487 tick 100
00:06:19.487 tick 250
00:06:19.487 tick 100
00:06:19.487 tick 100
00:06:19.487 test_end
00:06:19.487 ************************************
00:06:19.487 END TEST event_reactor
00:06:19.487 ************************************
00:06:19.487 
00:06:19.487 real 0m1.588s
00:06:19.487 user 0m1.391s
00:06:19.487 sys 0m0.085s
00:06:19.487 18:50:50 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:19.487 18:50:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
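Each START TEST/END TEST pair above is emitted by the harness's run_test wrapper, which also produces the real/user/sys timing lines. A minimal sketch of its shape, inferred from the traced @1105/@1111 checks and the banners; the real wrapper in common/autotest_common.sh does more bookkeeping:

    run_test() {                             # approximate shape, not the verbatim helper
      local test_name=$1
      shift
      [ "$#" -le 1 ] && xtrace_disable       # @1105/@1111: quiet tracing for simple invocations
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                              # emits the real/user/sys lines seen in the log
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
    }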
00:06:19.487 18:50:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:19.487 18:50:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:19.487 18:50:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:19.487 18:50:50 event -- common/autotest_common.sh@10 -- # set +x
00:06:19.487 ************************************
00:06:19.487 START TEST event_reactor_perf
00:06:19.487 ************************************
00:06:19.487 18:50:50 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:19.487 [2024-11-26 18:50:50.640566] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:06:19.487 [2024-11-26 18:50:50.640784] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59517 ]
00:06:19.745 [2024-11-26 18:50:50.830819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:19.745 [2024-11-26 18:50:50.937325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.121 test_start
00:06:21.121 test_end
00:06:21.121 Performance: 266850 events per second
00:06:21.121 
00:06:21.121 real 0m1.556s
00:06:21.121 user 0m1.346s
00:06:21.121 sys 0m0.099s
00:06:21.121 18:50:52 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:21.121 18:50:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:21.121 ************************************
00:06:21.121 END TEST event_reactor_perf
00:06:21.121 ************************************
00:06:21.121 18:50:52 event -- event/event.sh@49 -- # uname -s
00:06:21.121 18:50:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:21.121 18:50:52 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:21.121 18:50:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:21.121 18:50:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:21.121 18:50:52 event -- common/autotest_common.sh@10 -- # set +x
00:06:21.121 ************************************
00:06:21.121 START TEST event_scheduler
00:06:21.121 ************************************
00:06:21.121 18:50:52 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:21.121 * Looking for test storage...
00:06:21.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:06:21.121 18:50:52 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:21.121 18:50:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:06:21.121 18:50:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:21.379 18:50:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:21.379 18:50:52 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:06:21.380 18:50:52 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:21.380 18:50:52 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:21.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:21.380 --rc genhtml_branch_coverage=1
00:06:21.380 --rc genhtml_function_coverage=1
00:06:21.380 --rc genhtml_legend=1
00:06:21.380 --rc geninfo_all_blocks=1
00:06:21.380 --rc geninfo_unexecuted_blocks=1
00:06:21.380 
00:06:21.380 '
00:06:21.380 18:50:52 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:21.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:21.380 --rc genhtml_branch_coverage=1
00:06:21.380 --rc genhtml_function_coverage=1
00:06:21.380 --rc genhtml_legend=1
00:06:21.380 --rc geninfo_all_blocks=1
00:06:21.380 --rc geninfo_unexecuted_blocks=1
00:06:21.380 
00:06:21.380 '
00:06:21.380 18:50:52 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:21.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:21.380 --rc genhtml_branch_coverage=1
00:06:21.380 --rc genhtml_function_coverage=1
00:06:21.380 --rc genhtml_legend=1
00:06:21.380 --rc geninfo_all_blocks=1
00:06:21.380 --rc geninfo_unexecuted_blocks=1
00:06:21.380 
00:06:21.380 '
00:06:21.380 18:50:52 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:21.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:21.380 --rc genhtml_branch_coverage=1
00:06:21.380 --rc genhtml_function_coverage=1
00:06:21.380 --rc genhtml_legend=1
00:06:21.380 --rc geninfo_all_blocks=1
00:06:21.380 --rc geninfo_unexecuted_blocks=1
00:06:21.380 
00:06:21.380 '
00:06:21.380 18:50:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:21.380 18:50:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59589
00:06:21.380 18:50:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:21.380 18:50:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:21.380 18:50:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59589
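waitforlisten, traced next, blocks until the freshly launched scheduler app is alive and exposes its RPC socket. A rough bash sketch of the loop implied by the trace; the retry interval and readiness probe are assumptions, and the real helper in common/autotest_common.sh (see the @864/@868 counter check later in the trace) differs in detail:

    waitforlisten() {
      local pid=$1
      [ -z "$pid" ] && return 1                   # @835: require a pid
      local rpc_addr=${2:-/var/tmp/spdk.sock}     # @839: default RPC socket path
      local max_retries=100                       # @840
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      local i
      for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1    # app died while we were waiting
        [ -S "$rpc_addr" ] && return 0            # assumed probe; the real helper asks via rpc.py
        sleep 0.5                                 # assumed back-off
      done
      return 1
    }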
00:06:21.380 18:50:52 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59589 ']'
00:06:21.380 18:50:52 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:21.380 18:50:52 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:21.380 18:50:52 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:21.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:21.380 18:50:52 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:21.380 18:50:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:21.380 [2024-11-26 18:50:52.509039] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:06:21.380 [2024-11-26 18:50:52.509516] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59589 ]
00:06:21.637 [2024-11-26 18:50:52.703490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:21.896 [2024-11-26 18:50:52.853259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.896 [2024-11-26 18:50:52.853344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:21.896 [2024-11-26 18:50:52.853423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:21.896 [2024-11-26 18:50:52.853423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:22.462 18:50:53 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:22.462 18:50:53 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:06:22.462 18:50:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:22.462 18:50:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.462 18:50:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:22.462 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:22.462 POWER: Cannot set governor of lcore 0 to userspace
00:06:22.462 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:22.462 POWER: Cannot set governor of lcore 0 to performance
00:06:22.462 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:22.462 POWER: Cannot set governor of lcore 0 to userspace
00:06:22.462 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:22.462 POWER: Cannot set governor of lcore 0 to userspace
00:06:22.462 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:06:22.462 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:06:22.462 POWER: Unable to set Power Management Environment for lcore 0
00:06:22.462 [2024-11-26 18:50:53.557609] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:06:22.462 [2024-11-26 18:50:53.557760] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:06:22.462 [2024-11-26 18:50:53.557890] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:22.462 [2024-11-26 18:50:53.558049] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:22.462 [2024-11-26 18:50:53.558188] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:22.462 [2024-11-26 18:50:53.558338] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:22.462 18:50:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.462 18:50:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:22.462 18:50:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.462 18:50:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:22.723 [2024-11-26 18:50:53.864917] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:22.723 18:50:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.723 18:50:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:22.723 18:50:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:22.723 18:50:53 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:22.723 18:50:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:22.723 ************************************
00:06:22.723 START TEST scheduler_create_thread
00:06:22.723 ************************************
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.723 2
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.723 3
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.723 4
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.723 5
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.723 6
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.723 7
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.723 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.982 8
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.982 9
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.982 10
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:22.982 18:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.550 ************************************
00:06:23.550 END TEST scheduler_create_thread
00:06:23.550 ************************************
00:06:23.550 18:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.550 
00:06:23.550 real 0m0.605s
00:06:23.550 user 0m0.016s
00:06:23.550 sys 0m0.006s
00:06:23.550 18:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:23.550 18:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.550 18:50:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:23.550 18:50:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59589
00:06:23.550 18:50:54 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59589 ']'
00:06:23.550 18:50:54 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59589
00:06:23.550 18:50:54 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:06:23.550 18:50:54 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:23.550 18:50:54 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59589
00:06:23.550 killing process with pid 59589
00:06:23.550 18:50:54 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:06:23.550 18:50:54 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:06:23.550 18:50:54 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59589'
00:06:23.550 18:50:54 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59589
00:06:23.808 18:50:54 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59589
00:06:23.808 [2024-11-26 18:50:54.957450] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:25.184 
00:06:25.184 real 0m3.845s
00:06:25.184 user 0m7.706s
00:06:25.184 sys 0m0.440s
00:06:25.184 ************************************
00:06:25.184 END TEST event_scheduler
00:06:25.184 ************************************
00:06:25.184 18:50:56 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.184 18:50:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
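For reference, the thread-creation RPCs driven by the scheduler_create_thread test above map onto rpc_cmd invocations like the following. This is a usage sketch assembled from the traced commands (rpc_cmd is the harness wrapper around scripts/rpc.py, per scheduler.sh@29); the masks and weights shown are the ones the test actually used:

    # Active thread pinned to core 0 with load weight 100 (scheduler.sh@12)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # Idle pinned thread (weight 0) on the same core (scheduler.sh@16)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # Unpinned thread at roughly one-third load (scheduler.sh@21)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    # Capture the new thread id, raise its active load to 50, then delete a throwaway thread
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"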
00:06:25.184 [2024-11-26 18:50:56.146668] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59681 ] 00:06:25.184 [2024-11-26 18:50:56.353147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.443 [2024-11-26 18:50:56.474986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.443 [2024-11-26 18:50:56.474993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.377 18:50:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.377 18:50:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:26.377 18:50:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.635 Malloc0 00:06:26.635 18:50:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.201 Malloc1 00:06:27.201 18:50:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.201 18:50:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.461 /dev/nbd0 00:06:27.461 18:50:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.461 18:50:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:27.461 18:50:58 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.461 1+0 records in 00:06:27.461 1+0 records out 00:06:27.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418427 s, 9.8 MB/s 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.461 18:50:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:27.461 18:50:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.461 18:50:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.461 18:50:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.775 /dev/nbd1 00:06:27.775 18:50:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.775 18:50:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.775 1+0 records in 00:06:27.775 1+0 records out 00:06:27.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326981 s, 12.5 MB/s 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.775 18:50:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:27.775 18:50:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.775 18:50:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.775 18:50:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.775 18:50:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.775 
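Round 0 above creates two 64 MiB malloc bdevs with a 4 KiB block size (bdev_malloc_create 64 4096) and exports them as /dev/nbd0 and /dev/nbd1 via nbd_start_disk. The waitfornbd helper whose trace precedes this point can be reconstructed roughly as below; the 0.1 s sleeps are assumed, everything else mirrors the traced commands:

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        for ((i = 1; i <= 20; i++)); do        # wait for the kernel to publish the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                          # assumed poll interval
        done
        for ((i = 1; i <= 20; i++)); do        # confirm a direct 4 KiB read actually lands
            dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
            sleep 0.1                          # assumed poll interval
        done
        return 1
    }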
18:50:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.049 { 00:06:28.049 "nbd_device": "/dev/nbd0", 00:06:28.049 "bdev_name": "Malloc0" 00:06:28.049 }, 00:06:28.049 { 00:06:28.049 "nbd_device": "/dev/nbd1", 00:06:28.049 "bdev_name": "Malloc1" 00:06:28.049 } 00:06:28.049 ]' 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.049 { 00:06:28.049 "nbd_device": "/dev/nbd0", 00:06:28.049 "bdev_name": "Malloc0" 00:06:28.049 }, 00:06:28.049 { 00:06:28.049 "nbd_device": "/dev/nbd1", 00:06:28.049 "bdev_name": "Malloc1" 00:06:28.049 } 00:06:28.049 ]' 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.049 /dev/nbd1' 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.049 /dev/nbd1' 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.049 256+0 records in 00:06:28.049 256+0 records out 00:06:28.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647883 s, 162 MB/s 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.049 18:50:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.306 256+0 records in 00:06:28.306 256+0 records out 00:06:28.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248376 s, 42.2 MB/s 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.306 256+0 records in 00:06:28.306 256+0 records out 00:06:28.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296499 s, 35.4 MB/s 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.306 18:50:59 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.306 18:50:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.563 18:50:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.563 18:50:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.563 18:50:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.563 18:50:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.563 18:50:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.563 18:50:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.563 18:50:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.563 18:50:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.563 18:50:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.563 18:50:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.129 18:51:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.129 18:51:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.129 18:51:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.129 18:51:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.129 18:51:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.129 18:51:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.130 18:51:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.130 18:51:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.130 18:51:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.130 18:51:00 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.130 18:51:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.388 18:51:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.388 18:51:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.953 18:51:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.888 [2024-11-26 18:51:01.949422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.888 [2024-11-26 18:51:02.049980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.888 [2024-11-26 18:51:02.049995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.146 [2024-11-26 18:51:02.219461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.146 [2024-11-26 18:51:02.219570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.042 18:51:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.042 spdk_app_start Round 1 00:06:33.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.042 18:51:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:33.042 18:51:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59681 /var/tmp/spdk-nbd.sock 00:06:33.042 18:51:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59681 ']' 00:06:33.043 18:51:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.043 18:51:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.043 18:51:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
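The write/verify pass that just completed (nbd_dd_data_verify) pushes 1 MiB of random data through each exported device and byte-compares the device contents back against the source file. Condensed from the traced commands, with the device list fixed to the two devices of this run:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 256 x 4 KiB = 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write phase
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                              # verify phase: non-zero exit on any mismatch
    done
    rm "$tmp"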
00:06:33.043 18:51:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.043 18:51:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.043 18:51:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.043 18:51:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:33.043 18:51:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.631 Malloc0 00:06:33.631 18:51:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.888 Malloc1 00:06:33.888 18:51:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.888 18:51:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.888 18:51:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.888 18:51:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.889 18:51:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.146 /dev/nbd0 00:06:34.146 18:51:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.146 18:51:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.146 18:51:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:34.146 18:51:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:34.146 18:51:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.146 18:51:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.146 18:51:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:34.146 18:51:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:34.146 18:51:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.146 18:51:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.146 18:51:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.146 1+0 records in 00:06:34.146 1+0 records out 
00:06:34.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360043 s, 11.4 MB/s 00:06:34.146 18:51:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.146 18:51:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:34.147 18:51:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.147 18:51:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.147 18:51:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:34.147 18:51:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.147 18:51:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.147 18:51:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.404 /dev/nbd1 00:06:34.662 18:51:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.662 18:51:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.662 1+0 records in 00:06:34.662 1+0 records out 00:06:34.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357704 s, 11.5 MB/s 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.662 18:51:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:34.662 18:51:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.662 18:51:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.662 18:51:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.662 18:51:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.662 18:51:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:34.920 { 00:06:34.920 "nbd_device": "/dev/nbd0", 00:06:34.920 "bdev_name": "Malloc0" 00:06:34.920 }, 00:06:34.920 { 00:06:34.920 "nbd_device": "/dev/nbd1", 00:06:34.920 "bdev_name": "Malloc1" 00:06:34.920 } 
00:06:34.920 ]' 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.920 { 00:06:34.920 "nbd_device": "/dev/nbd0", 00:06:34.920 "bdev_name": "Malloc0" 00:06:34.920 }, 00:06:34.920 { 00:06:34.920 "nbd_device": "/dev/nbd1", 00:06:34.920 "bdev_name": "Malloc1" 00:06:34.920 } 00:06:34.920 ]' 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.920 /dev/nbd1' 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.920 /dev/nbd1' 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.920 18:51:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:34.920 256+0 records in 00:06:34.920 256+0 records out 00:06:34.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00705428 s, 149 MB/s 00:06:34.921 18:51:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.921 18:51:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:34.921 256+0 records in 00:06:34.921 256+0 records out 00:06:34.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301577 s, 34.8 MB/s 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.921 256+0 records in 00:06:34.921 256+0 records out 00:06:34.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0343042 s, 30.6 MB/s 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.921 18:51:06 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.921 18:51:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.486 18:51:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.744 18:51:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.744 18:51:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.744 18:51:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.001 18:51:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.001 18:51:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.566 18:51:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.498 [2024-11-26 18:51:08.556577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.498 [2024-11-26 18:51:08.658738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.498 [2024-11-26 18:51:08.658740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.756 [2024-11-26 18:51:08.827517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.756 [2024-11-26 18:51:08.827597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.656 spdk_app_start Round 2 00:06:39.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.656 18:51:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.656 18:51:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:39.656 18:51:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59681 /var/tmp/spdk-nbd.sock 00:06:39.656 18:51:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59681 ']' 00:06:39.656 18:51:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.656 18:51:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.656 18:51:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
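The teardown/restart cadence now repeating (spdk_kill_instance SIGTERM, a 3 s pause, the next Round banner) follows the loop traced from event.sh. Its reconstructed shape, with the per-round malloc/NBD body elided:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"
        # ... create Malloc0/Malloc1, export over NBD, write/verify, unexport ...
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" \
            spdk_kill_instance SIGTERM         # app_repeat reinitializes itself for the next round
        sleep 3
    done
    waitforlisten "$repeat_pid" "$rpc_server"  # Round 3: final listen before killprocess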
00:06:39.656 18:51:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.656 18:51:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.656 18:51:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.656 18:51:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:39.656 18:51:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.221 Malloc0 00:06:40.221 18:51:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.480 Malloc1 00:06:40.480 18:51:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.480 18:51:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.047 /dev/nbd0 00:06:41.047 18:51:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.047 18:51:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.047 18:51:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.047 1+0 records in 00:06:41.047 1+0 records out 
00:06:41.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036036 s, 11.4 MB/s 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.047 18:51:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.047 18:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.047 18:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.047 18:51:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.306 /dev/nbd1 00:06:41.306 18:51:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.306 18:51:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.306 1+0 records in 00:06:41.306 1+0 records out 00:06:41.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422712 s, 9.7 MB/s 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.306 18:51:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.306 18:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.306 18:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.306 18:51:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.306 18:51:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.306 18:51:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.564 { 00:06:41.564 "nbd_device": "/dev/nbd0", 00:06:41.564 "bdev_name": "Malloc0" 00:06:41.564 }, 00:06:41.564 { 00:06:41.564 "nbd_device": "/dev/nbd1", 00:06:41.564 "bdev_name": "Malloc1" 00:06:41.564 } 
00:06:41.564 ]' 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.564 { 00:06:41.564 "nbd_device": "/dev/nbd0", 00:06:41.564 "bdev_name": "Malloc0" 00:06:41.564 }, 00:06:41.564 { 00:06:41.564 "nbd_device": "/dev/nbd1", 00:06:41.564 "bdev_name": "Malloc1" 00:06:41.564 } 00:06:41.564 ]' 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.564 /dev/nbd1' 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.564 /dev/nbd1' 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.564 256+0 records in 00:06:41.564 256+0 records out 00:06:41.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00694851 s, 151 MB/s 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.564 18:51:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.821 256+0 records in 00:06:41.821 256+0 records out 00:06:41.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281261 s, 37.3 MB/s 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.821 256+0 records in 00:06:41.821 256+0 records out 00:06:41.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0339845 s, 30.9 MB/s 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.821 18:51:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.821 18:51:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.078 18:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.078 18:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.078 18:51:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.078 18:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.078 18:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.078 18:51:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.078 18:51:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.078 18:51:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.078 18:51:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.078 18:51:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.643 18:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.643 18:51:13 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.901 18:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.901 18:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.901 18:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.901 18:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.901 18:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.901 18:51:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.901 18:51:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.901 18:51:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.901 18:51:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.901 18:51:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.469 18:51:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:44.403 [2024-11-26 18:51:15.413956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.403 [2024-11-26 18:51:15.514695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.403 [2024-11-26 18:51:15.514712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.661 [2024-11-26 18:51:15.682409] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:44.661 [2024-11-26 18:51:15.682525] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.563 18:51:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59681 /var/tmp/spdk-nbd.sock 00:06:46.563 18:51:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59681 ']' 00:06:46.563 18:51:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.563 18:51:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.563 18:51:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
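The '[]' / count=0 exchange just above is the post-unexport check: once both devices are stopped, nbd_get_disks must return an empty list. The counting helper, per its trace (the trailing true matches the traced "# true" and absorbs grep's non-zero exit when nothing matches):

    nbd_get_count() {
        local rpc_server=$1 json names
        json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        echo "$names" | grep -c /dev/nbd || true   # prints 0 (and would exit 1) on zero matches
    }

    [ "$(nbd_get_count "$rpc_server")" -eq 0 ]     # expected to hold after nbd_stop_disks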
00:06:46.563 18:51:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.563 18:51:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:46.821 18:51:17 event.app_repeat -- event/event.sh@39 -- # killprocess 59681 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59681 ']' 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59681 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59681 00:06:46.821 killing process with pid 59681 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59681' 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59681 00:06:46.821 18:51:17 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59681 00:06:47.756 spdk_app_start is called in Round 0. 00:06:47.756 Shutdown signal received, stop current app iteration 00:06:47.756 Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 reinitialization... 00:06:47.756 spdk_app_start is called in Round 1. 00:06:47.756 Shutdown signal received, stop current app iteration 00:06:47.756 Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 reinitialization... 00:06:47.757 spdk_app_start is called in Round 2. 00:06:47.757 Shutdown signal received, stop current app iteration 00:06:47.757 Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 reinitialization... 00:06:47.757 spdk_app_start is called in Round 3. 00:06:47.757 Shutdown signal received, stop current app iteration 00:06:47.757 ************************************ 00:06:47.757 END TEST app_repeat 00:06:47.757 ************************************ 00:06:47.757 18:51:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:47.757 18:51:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:47.757 00:06:47.757 real 0m22.734s 00:06:47.757 user 0m51.374s 00:06:47.757 sys 0m2.947s 00:06:47.757 18:51:18 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.757 18:51:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.757 18:51:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:47.757 18:51:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:47.757 18:51:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.757 18:51:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.757 18:51:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.757 ************************************ 00:06:47.757 START TEST cpu_locks 00:06:47.757 ************************************ 00:06:47.757 18:51:18 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:47.757 * Looking for test storage... 
00:06:47.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:47.757 18:51:18 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.757 18:51:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.757 18:51:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.078 18:51:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.078 18:51:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:48.078 18:51:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.078 18:51:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.078 --rc genhtml_branch_coverage=1 00:06:48.078 --rc genhtml_function_coverage=1 00:06:48.078 --rc genhtml_legend=1 00:06:48.078 --rc geninfo_all_blocks=1 00:06:48.078 --rc geninfo_unexecuted_blocks=1 00:06:48.078 00:06:48.078 ' 00:06:48.078 18:51:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.078 --rc genhtml_branch_coverage=1 00:06:48.078 --rc genhtml_function_coverage=1 
00:06:48.078 --rc genhtml_legend=1 00:06:48.078 --rc geninfo_all_blocks=1 00:06:48.078 --rc geninfo_unexecuted_blocks=1 00:06:48.078 00:06:48.078 ' 00:06:48.078 18:51:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.078 --rc genhtml_branch_coverage=1 00:06:48.078 --rc genhtml_function_coverage=1 00:06:48.078 --rc genhtml_legend=1 00:06:48.078 --rc geninfo_all_blocks=1 00:06:48.078 --rc geninfo_unexecuted_blocks=1 00:06:48.078 00:06:48.078 ' 00:06:48.078 18:51:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.078 --rc genhtml_branch_coverage=1 00:06:48.078 --rc genhtml_function_coverage=1 00:06:48.078 --rc genhtml_legend=1 00:06:48.078 --rc geninfo_all_blocks=1 00:06:48.078 --rc geninfo_unexecuted_blocks=1 00:06:48.078 00:06:48.078 ' 00:06:48.078 18:51:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:48.078 18:51:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:48.078 18:51:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:48.078 18:51:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:48.078 18:51:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.078 18:51:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.078 18:51:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.078 ************************************ 00:06:48.078 START TEST default_locks 00:06:48.078 ************************************ 00:06:48.078 18:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:48.078 18:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60168 00:06:48.078 18:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60168 00:06:48.079 18:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.079 18:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60168 ']' 00:06:48.079 18:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.079 18:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.079 18:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.079 18:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.079 18:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.079 [2024-11-26 18:51:19.172436] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
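The version gate traced above (lt 1.15 2 -> cmp_versions 1.15 '<' 2) decides whether the installed lcov is old enough to need the explicit branch/function coverage flags packed into LCOV_OPTS. A minimal standalone sketch of the same field-by-field comparison in bash; collapsing cmp_versions into a single lt() and padding missing fields with 0 are simplifications of the traced helper:

    #!/usr/bin/env bash
    # Return 0 (true) when version $1 sorts strictly below version $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"   # split on the same separators as the trace
        IFS=.-: read -ra ver2 <<< "$2"
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # absent fields compare as 0
            (( d1 < d2 )) && return 0
            (( d1 > d2 )) && return 1
        done
        return 1   # equal is not "less than"
    }

    lt 1.15 2 && echo 'lcov predates 2.x: enable branch/function coverage flags'

The real scripts/common.sh additionally validates each field with a decimal helper before comparing, which is why the trace shows [[ 1 =~ ^[0-9]+$ ]] style checks.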
00:06:48.079 [2024-11-26 18:51:19.172603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60168 ] 00:06:48.336 [2024-11-26 18:51:19.347972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.336 [2024-11-26 18:51:19.452826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.269 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.269 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:49.269 18:51:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60168 00:06:49.269 18:51:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60168 00:06:49.269 18:51:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.526 18:51:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60168 00:06:49.526 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60168 ']' 00:06:49.526 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60168 00:06:49.526 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:49.526 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.526 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60168 00:06:49.526 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.783 killing process with pid 60168 00:06:49.783 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.783 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60168' 00:06:49.783 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60168 00:06:49.783 18:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60168 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60168 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60168 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60168 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60168 ']' 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
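The locks_exist check traced here is the core assertion of default_locks: a freshly started target must hold a file lock whose path contains spdk_cpu_lock. A sketch of the helper as traced (lslocks and the lock-file naming come straight from the log; the error message and $spdk_tgt_pid are illustrative):

    # Assert that process $1 holds at least one SPDK per-core lock file
    # (/var/tmp/spdk_cpu_lock_<core>, as listed later by check_remaining_locks).
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist "$spdk_tgt_pid" || echo "core lock missing for $spdk_tgt_pid" >&2

lslocks reads the kernel's lock table, so the check works without knowing in advance which core numbers the target claimed.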
00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.682 ERROR: process (pid: 60168) is no longer running 00:06:51.682 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60168) - No such process 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:51.682 00:06:51.682 real 0m3.779s 00:06:51.682 user 0m3.934s 00:06:51.682 sys 0m0.628s 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.682 ************************************ 00:06:51.682 END TEST default_locks 00:06:51.682 ************************************ 00:06:51.682 18:51:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.682 18:51:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:51.682 18:51:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.682 18:51:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.682 18:51:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.682 ************************************ 00:06:51.682 START TEST default_locks_via_rpc 00:06:51.682 ************************************ 00:06:51.682 18:51:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:51.682 18:51:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60237 00:06:51.682 18:51:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60237 00:06:51.682 18:51:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.682 18:51:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60237 ']' 00:06:51.682 18:51:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
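The NOT wrapper seen above turns an expected failure into a test success: after killprocess, waitforlisten on the dead pid must fail, and NOT inverts that status. A simplified sketch; the traced helper also routes through valid_exec_arg and accepts signal-style exit codes (the (( es > 128 )) branch), which is omitted here:

    # Succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # the trace's (( !es == 0 )), written the readable way
    }

    NOT waitforlisten "$dead_pid" && echo 'waitforlisten failed as expected'

$dead_pid is illustrative; in the log it is the already-killed spdk_tgt pid 60168.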
00:06:51.682 18:51:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.682 18:51:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.682 18:51:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.682 18:51:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.940 [2024-11-26 18:51:23.037125] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:06:51.940 [2024-11-26 18:51:23.037376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60237 ] 00:06:52.197 [2024-11-26 18:51:23.234228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.197 [2024-11-26 18:51:23.339877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60237 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60237 00:06:53.179 18:51:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.438 18:51:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60237 00:06:53.438 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60237 ']' 00:06:53.438 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60237 00:06:53.438 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:53.695 18:51:24 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.695 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60237 00:06:53.695 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.695 killing process with pid 60237 00:06:53.695 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.695 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60237' 00:06:53.695 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60237 00:06:53.695 18:51:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60237 00:06:56.219 00:06:56.219 real 0m3.979s 00:06:56.219 user 0m4.146s 00:06:56.219 sys 0m0.653s 00:06:56.219 18:51:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.219 18:51:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.219 ************************************ 00:06:56.219 END TEST default_locks_via_rpc 00:06:56.219 ************************************ 00:06:56.219 18:51:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:56.219 18:51:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.219 18:51:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.219 18:51:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.219 ************************************ 00:06:56.219 START TEST non_locking_app_on_locked_coremask 00:06:56.219 ************************************ 00:06:56.219 18:51:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:56.219 18:51:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60311 00:06:56.219 18:51:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60311 /var/tmp/spdk.sock 00:06:56.219 18:51:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.219 18:51:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60311 ']' 00:06:56.219 18:51:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.219 18:51:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.219 18:51:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.219 18:51:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.219 18:51:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.219 [2024-11-26 18:51:27.018230] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
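default_locks_via_rpc, which just finished above, exercises the same lock files over the JSON-RPC socket: framework_disable_cpumask_locks releases every per-core lock at runtime (so the no_locks glob finds nothing), and framework_enable_cpumask_locks re-claims them (so locks_exist passes again). A sketch using SPDK's rpc.py client, which the traced rpc_cmd helper wraps; the $SPDK path is assumed from the repo layout in the trace:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Drop the per-core lock files while the target keeps running.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo 'unexpected lock' >&2

    # Re-claim them; any other process claiming the same cores would now fail.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock || echo 'lock missing' >&2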
00:06:56.219 [2024-11-26 18:51:27.018902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60311 ] 00:06:56.219 [2024-11-26 18:51:27.247091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.219 [2024-11-26 18:51:27.370564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.155 18:51:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.155 18:51:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:57.155 18:51:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60327 00:06:57.155 18:51:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:57.155 18:51:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60327 /var/tmp/spdk2.sock 00:06:57.155 18:51:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60327 ']' 00:06:57.155 18:51:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.155 18:51:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.155 18:51:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.155 18:51:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.155 18:51:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.155 [2024-11-26 18:51:28.327814] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:06:57.155 [2024-11-26 18:51:28.328077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60327 ] 00:06:57.413 [2024-11-26 18:51:28.544805] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
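The "CPU core locks deactivated." notice above is the second target acknowledging --disable-cpumask-locks: it shares core 0 with the first (locking) target without trying to claim the lock file. A sketch of the two-instance launch as traced, with the readiness waits (waitforlisten) omitted:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance claims the core-0 lock file on the default RPC socket.
    "$SPDK_TGT" -m 0x1 &
    locking_pid=$!

    # Second instance shares core 0 but opts out of locking, so it starts
    # cleanly on its own socket instead of aborting.
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    non_locking_pid=$!

The locking_app_on_unlocked_coremask test that follows inverts the roles: the first target runs with --disable-cpumask-locks and the second one claims the lock.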
00:06:57.413 [2024-11-26 18:51:28.544885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.705 [2024-11-26 18:51:28.811159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.286 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.286 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:00.286 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60311 00:07:00.286 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60311 00:07:00.286 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.853 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60311 00:07:00.853 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60311 ']' 00:07:00.853 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60311 00:07:00.853 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:00.853 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.853 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60311 00:07:00.853 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.853 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.853 killing process with pid 60311 00:07:00.853 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60311' 00:07:00.853 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60311 00:07:00.853 18:51:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60311 00:07:05.097 18:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60327 00:07:05.097 18:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60327 ']' 00:07:05.097 18:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60327 00:07:05.097 18:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:05.097 18:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.097 18:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60327 00:07:05.097 18:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.097 18:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.097 18:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60327' 00:07:05.097 killing process with pid 60327 00:07:05.097 18:51:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60327 00:07:05.097 18:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60327 00:07:07.624 00:07:07.624 real 0m11.532s 00:07:07.624 user 0m12.429s 00:07:07.624 sys 0m1.245s 00:07:07.624 18:51:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.624 18:51:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.624 ************************************ 00:07:07.624 END TEST non_locking_app_on_locked_coremask 00:07:07.624 ************************************ 00:07:07.624 18:51:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:07.624 18:51:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.624 18:51:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.624 18:51:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.624 ************************************ 00:07:07.624 START TEST locking_app_on_unlocked_coremask 00:07:07.624 ************************************ 00:07:07.624 18:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:07.624 18:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60481 00:07:07.624 18:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:07.624 18:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60481 /var/tmp/spdk.sock 00:07:07.624 18:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60481 ']' 00:07:07.624 18:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.624 18:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.624 18:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.624 18:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.624 18:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.624 [2024-11-26 18:51:38.587815] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:07:07.624 [2024-11-26 18:51:38.587979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60481 ] 00:07:07.624 [2024-11-26 18:51:38.763148] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:07.624 [2024-11-26 18:51:38.763232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.881 [2024-11-26 18:51:38.889472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.813 18:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.813 18:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:08.813 18:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60497 00:07:08.813 18:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:08.813 18:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60497 /var/tmp/spdk2.sock 00:07:08.813 18:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60497 ']' 00:07:08.813 18:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.813 18:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.813 18:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.813 18:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.813 18:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.813 [2024-11-26 18:51:39.835789] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:07:08.813 [2024-11-26 18:51:39.836024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60497 ] 00:07:09.070 [2024-11-26 18:51:40.061608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.328 [2024-11-26 18:51:40.340270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.853 18:51:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.853 18:51:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:11.853 18:51:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60497 00:07:11.853 18:51:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60497 00:07:11.853 18:51:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.787 18:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60481 00:07:12.788 18:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60481 ']' 00:07:12.788 18:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60481 00:07:12.788 18:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:12.788 18:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.788 18:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60481 00:07:12.788 18:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.788 18:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.788 18:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60481' 00:07:12.788 killing process with pid 60481 00:07:12.788 18:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60481 00:07:12.788 18:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60481 00:07:16.968 18:51:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60497 00:07:16.968 18:51:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60497 ']' 00:07:16.968 18:51:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60497 00:07:16.968 18:51:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:16.968 18:51:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.968 18:51:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60497 00:07:16.968 18:51:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.968 18:51:47 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.968 killing process with pid 60497 00:07:16.968 18:51:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60497' 00:07:16.968 18:51:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60497 00:07:16.968 18:51:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60497 00:07:18.870 00:07:18.870 real 0m11.575s 00:07:18.870 user 0m12.735s 00:07:18.870 sys 0m1.316s 00:07:18.871 18:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.871 ************************************ 00:07:18.871 END TEST locking_app_on_unlocked_coremask 00:07:18.871 ************************************ 00:07:18.871 18:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.129 18:51:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:19.129 18:51:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.129 18:51:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.129 18:51:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.129 ************************************ 00:07:19.129 START TEST locking_app_on_locked_coremask 00:07:19.129 ************************************ 00:07:19.129 18:51:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:19.129 18:51:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60645 00:07:19.129 18:51:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.129 18:51:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60645 /var/tmp/spdk.sock 00:07:19.129 18:51:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60645 ']' 00:07:19.129 18:51:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.129 18:51:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.129 18:51:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.129 18:51:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.129 18:51:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.129 [2024-11-26 18:51:50.210304] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:07:19.129 [2024-11-26 18:51:50.210507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60645 ] 00:07:19.387 [2024-11-26 18:51:50.428839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.388 [2024-11-26 18:51:50.553718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.320 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.320 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:20.320 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60661 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60661 /var/tmp/spdk2.sock 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60661 /var/tmp/spdk2.sock 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:20.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60661 /var/tmp/spdk2.sock 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60661 ']' 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.321 18:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.321 [2024-11-26 18:51:51.418574] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
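The second target launched above (pid 60661, same core mask, locks enabled) is the one expected to die: the claim_cpu_cores error on the next lines shows it refusing core 0 because pid 60645 already holds the lock. The lock itself is an advisory lock on /var/tmp/spdk_cpu_lock_<core> (the zero-padded names appear later in check_remaining_locks); whether SPDK's app.c takes it via fcntl or flock is not visible in the trace, so the flock(1) emulation below is only a sketch of the behaviour:

    # Rough emulation of a per-core claim with flock(1).
    core=000
    exec 9>"/var/tmp/spdk_cpu_lock_${core}"      # create/open the lock file on fd 9
    if ! flock -n 9; then                        # non-blocking: fail if already held
        echo "Cannot create lock on core ${core}: another process has claimed it" >&2
        exit 1
    fi
    # fd 9 (and with it the lock) stays held for the life of the process.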
00:07:20.321 [2024-11-26 18:51:51.418728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60661 ] 00:07:20.579 [2024-11-26 18:51:51.610647] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60645 has claimed it. 00:07:20.579 [2024-11-26 18:51:51.610731] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:21.147 ERROR: process (pid: 60661) is no longer running 00:07:21.147 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60661) - No such process 00:07:21.147 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.147 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:21.147 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:21.147 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.147 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.147 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.147 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60645 00:07:21.147 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.147 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60645 00:07:21.406 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60645 00:07:21.406 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60645 ']' 00:07:21.406 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60645 00:07:21.406 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:21.406 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.406 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60645 00:07:21.406 killing process with pid 60645 00:07:21.406 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.406 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.406 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60645' 00:07:21.406 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60645 00:07:21.406 18:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60645 00:07:23.936 00:07:23.936 real 0m4.625s 00:07:23.936 user 0m5.097s 00:07:23.936 sys 0m0.752s 00:07:23.936 ************************************ 00:07:23.936 END TEST locking_app_on_locked_coremask 00:07:23.936 ************************************ 00:07:23.936 18:51:54 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.936 18:51:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.936 18:51:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:23.936 18:51:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.936 18:51:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.936 18:51:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.936 ************************************ 00:07:23.936 START TEST locking_overlapped_coremask 00:07:23.936 ************************************ 00:07:23.936 18:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:23.936 18:51:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:23.936 18:51:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60736 00:07:23.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.936 18:51:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60736 /var/tmp/spdk.sock 00:07:23.936 18:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60736 ']' 00:07:23.936 18:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.936 18:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.936 18:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.936 18:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.936 18:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.936 [2024-11-26 18:51:54.911683] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:07:23.936 [2024-11-26 18:51:54.912046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60736 ] 00:07:23.936 [2024-11-26 18:51:55.089079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.215 [2024-11-26 18:51:55.233668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.215 [2024-11-26 18:51:55.233773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.215 [2024-11-26 18:51:55.233773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.148 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.148 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:25.148 18:51:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60754 00:07:25.148 18:51:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60754 /var/tmp/spdk2.sock 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60754 /var/tmp/spdk2.sock 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60754 /var/tmp/spdk2.sock 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60754 ']' 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.149 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.149 [2024-11-26 18:51:56.118824] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
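The two core masks traced here are the whole point of locking_overlapped_coremask: -m 0x7 pins reactors to cores 0-2 and -m 0x1c to cores 2-4, so the only contested core is 2, which is exactly where the claim fails on the lines that follow. A small sketch for decoding such masks:

    # List the cores selected by a hex cpumask (0x7 -> 0 1 2, 0x1c -> 2 3 4).
    mask_to_cores() {
        local mask=$(( $1 )) core=0
        local -a cores=()
        while (( mask )); do
            if (( mask & 1 )); then cores+=("$core"); fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "${cores[@]}"
    }

    mask_to_cores 0x7    # -> 0 1 2
    mask_to_cores 0x1c   # -> 2 3 4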
00:07:25.149 [2024-11-26 18:51:56.118974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60754 ] 00:07:25.149 [2024-11-26 18:51:56.324152] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60736 has claimed it. 00:07:25.149 [2024-11-26 18:51:56.324283] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:25.713 ERROR: process (pid: 60754) is no longer running 00:07:25.713 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60754) - No such process 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60736 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60736 ']' 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60736 00:07:25.713 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:25.714 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.714 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60736 00:07:25.714 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.714 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.714 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60736' 00:07:25.714 killing process with pid 60736 00:07:25.714 18:51:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60736 00:07:25.714 18:51:56 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60736 00:07:28.239 ************************************ 00:07:28.239 END TEST locking_overlapped_coremask 00:07:28.239 ************************************ 00:07:28.239 00:07:28.239 real 0m4.239s 00:07:28.239 user 0m11.612s 00:07:28.239 sys 0m0.547s 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.239 18:51:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:28.239 18:51:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.239 18:51:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.239 18:51:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.239 ************************************ 00:07:28.239 START TEST locking_overlapped_coremask_via_rpc 00:07:28.239 ************************************ 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:28.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60818 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60818 /var/tmp/spdk.sock 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60818 ']' 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.239 18:51:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.239 [2024-11-26 18:51:59.152947] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:07:28.239 [2024-11-26 18:51:59.153098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60818 ] 00:07:28.239 [2024-11-26 18:51:59.346252] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:28.239 [2024-11-26 18:51:59.346329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.496 [2024-11-26 18:51:59.462684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.496 [2024-11-26 18:51:59.462829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.496 [2024-11-26 18:51:59.462831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.428 18:52:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.428 18:52:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:29.428 18:52:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60836 00:07:29.428 18:52:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:29.428 18:52:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60836 /var/tmp/spdk2.sock 00:07:29.429 18:52:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60836 ']' 00:07:29.429 18:52:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.429 18:52:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.429 18:52:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.429 18:52:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.429 18:52:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.429 [2024-11-26 18:52:00.439389] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:07:29.429 [2024-11-26 18:52:00.439867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60836 ] 00:07:29.727 [2024-11-26 18:52:00.654455] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:29.727 [2024-11-26 18:52:00.654551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.727 [2024-11-26 18:52:00.879025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.727 [2024-11-26 18:52:00.879117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.727 [2024-11-26 18:52:00.879127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.639 [2024-11-26 18:52:02.481463] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60818 has claimed it. 00:07:31.639 request: 00:07:31.639 { 00:07:31.639 "method": "framework_enable_cpumask_locks", 00:07:31.639 "req_id": 1 00:07:31.639 } 00:07:31.639 Got JSON-RPC error response 00:07:31.639 response: 00:07:31.639 { 00:07:31.639 "code": -32603, 00:07:31.639 "message": "Failed to claim CPU core: 2" 00:07:31.639 } 00:07:31.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
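The -32603 error just above is the point of this test: the first target (-m 0x7, cores 0-2) claims its cores when framework_enable_cpumask_locks is called, so the second target (-m 0x1c, cores 2-4) cannot claim the shared core 2 and its RPC fails. A minimal by-hand reproduction, assuming the same spdk_tgt and rpc.py paths this job uses:

  # cores 0-2, with lock claiming deferred until the RPC is called
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  # cores 2-4, on a second RPC socket so the two targets do not collide
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  sleep 1   # crude wait; the test uses waitforlisten instead
  # first claim succeeds and locks cores 0-2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # second claim fails with -32603 "Failed to claim CPU core: 2", as logged above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks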
00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60818 /var/tmp/spdk.sock 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60818 ']' 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.639 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.897 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.897 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:31.897 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60836 /var/tmp/spdk2.sock 00:07:31.897 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60836 ']' 00:07:31.897 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.897 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.897 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
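The waitforlisten calls above block until each target's RPC socket answers before the test proceeds. A rough shell equivalent of that readiness loop (a sketch only, not the helper's actual implementation, which also verifies the pid is still alive):

  # retry a cheap RPC with a short timeout until the socket accepts requests
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done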
00:07:31.897 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.897 18:52:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.154 18:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.154 18:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.154 18:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:32.154 18:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:32.154 18:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:32.154 18:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:32.154 00:07:32.154 real 0m4.243s 00:07:32.155 user 0m1.830s 00:07:32.155 sys 0m0.215s 00:07:32.155 18:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.155 18:52:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.155 ************************************ 00:07:32.155 END TEST locking_overlapped_coremask_via_rpc 00:07:32.155 ************************************ 00:07:32.155 18:52:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:32.155 18:52:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60818 ]] 00:07:32.155 18:52:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60818 00:07:32.155 18:52:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60818 ']' 00:07:32.155 18:52:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60818 00:07:32.155 18:52:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:32.155 18:52:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.155 18:52:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60818 00:07:32.155 killing process with pid 60818 00:07:32.155 18:52:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.155 18:52:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.155 18:52:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60818' 00:07:32.155 18:52:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60818 00:07:32.155 18:52:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60818 00:07:35.456 18:52:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60836 ]] 00:07:35.456 18:52:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60836 00:07:35.456 18:52:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60836 ']' 00:07:35.456 18:52:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60836 00:07:35.456 18:52:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:35.456 18:52:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.456 
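The check_remaining_locks step above verifies the lock files themselves: each claimed core leaves a /var/tmp/spdk_cpu_lock_NNN file, so a target holding cores 0-2 must produce exactly three. The comparison is plain bash globbing against a brace expansion:

  locks=(/var/tmp/spdk_cpu_lock_*)                   # files actually present
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) # cores 0-2, zero-padded
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]      # the test passes only on an exact match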
18:52:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60836 00:07:35.456 killing process with pid 60836 00:07:35.456 18:52:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:35.456 18:52:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:35.456 18:52:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60836' 00:07:35.456 18:52:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60836 00:07:35.456 18:52:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60836 00:07:37.985 18:52:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:37.985 Process with pid 60818 is not found 00:07:37.985 18:52:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:37.985 18:52:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60818 ]] 00:07:37.985 18:52:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60818 00:07:37.985 18:52:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60818 ']' 00:07:37.985 18:52:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60818 00:07:37.985 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60818) - No such process 00:07:37.985 18:52:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60818 is not found' 00:07:37.985 18:52:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60836 ]] 00:07:37.985 18:52:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60836 00:07:37.985 18:52:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60836 ']' 00:07:37.985 Process with pid 60836 is not found 00:07:37.985 18:52:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60836 00:07:37.985 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60836) - No such process 00:07:37.985 18:52:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60836 is not found' 00:07:37.985 18:52:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:37.985 00:07:37.985 real 0m49.900s 00:07:37.985 user 1m29.204s 00:07:37.985 sys 0m6.307s 00:07:37.985 18:52:08 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.985 ************************************ 00:07:37.985 END TEST cpu_locks 00:07:37.985 ************************************ 00:07:37.985 18:52:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.985 ************************************ 00:07:37.985 END TEST event 00:07:37.985 ************************************ 00:07:37.985 00:07:37.985 real 1m21.638s 00:07:37.985 user 2m35.572s 00:07:37.985 sys 0m10.204s 00:07:37.985 18:52:08 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.985 18:52:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:37.985 18:52:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:37.985 18:52:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.985 18:52:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.985 18:52:08 -- common/autotest_common.sh@10 -- # set +x 00:07:37.985 ************************************ 00:07:37.985 START TEST thread 00:07:37.985 ************************************ 00:07:37.985 18:52:08 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:37.985 * Looking for test storage... 
00:07:37.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:37.985 18:52:08 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.985 18:52:08 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.985 18:52:08 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.985 18:52:08 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.985 18:52:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.985 18:52:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.985 18:52:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.985 18:52:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.985 18:52:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.985 18:52:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.985 18:52:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.985 18:52:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.985 18:52:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.985 18:52:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.985 18:52:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.985 18:52:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:37.985 18:52:08 thread -- scripts/common.sh@345 -- # : 1 00:07:37.985 18:52:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.985 18:52:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.985 18:52:08 thread -- scripts/common.sh@365 -- # decimal 1 00:07:37.985 18:52:08 thread -- scripts/common.sh@353 -- # local d=1 00:07:37.985 18:52:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.985 18:52:08 thread -- scripts/common.sh@355 -- # echo 1 00:07:37.985 18:52:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.985 18:52:08 thread -- scripts/common.sh@366 -- # decimal 2 00:07:37.985 18:52:08 thread -- scripts/common.sh@353 -- # local d=2 00:07:37.985 18:52:09 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.985 18:52:09 thread -- scripts/common.sh@355 -- # echo 2 00:07:37.985 18:52:09 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.985 18:52:09 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.985 18:52:09 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.985 18:52:09 thread -- scripts/common.sh@368 -- # return 0 00:07:37.985 18:52:09 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.985 18:52:09 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.985 --rc genhtml_branch_coverage=1 00:07:37.985 --rc genhtml_function_coverage=1 00:07:37.985 --rc genhtml_legend=1 00:07:37.985 --rc geninfo_all_blocks=1 00:07:37.985 --rc geninfo_unexecuted_blocks=1 00:07:37.985 00:07:37.985 ' 00:07:37.985 18:52:09 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.985 --rc genhtml_branch_coverage=1 00:07:37.985 --rc genhtml_function_coverage=1 00:07:37.985 --rc genhtml_legend=1 00:07:37.985 --rc geninfo_all_blocks=1 00:07:37.985 --rc geninfo_unexecuted_blocks=1 00:07:37.985 00:07:37.985 ' 00:07:37.985 18:52:09 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:37.985 --rc genhtml_branch_coverage=1 00:07:37.985 --rc genhtml_function_coverage=1 00:07:37.985 --rc genhtml_legend=1 00:07:37.985 --rc geninfo_all_blocks=1 00:07:37.985 --rc geninfo_unexecuted_blocks=1 00:07:37.985 00:07:37.985 ' 00:07:37.985 18:52:09 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.985 --rc genhtml_branch_coverage=1 00:07:37.985 --rc genhtml_function_coverage=1 00:07:37.985 --rc genhtml_legend=1 00:07:37.985 --rc geninfo_all_blocks=1 00:07:37.985 --rc geninfo_unexecuted_blocks=1 00:07:37.985 00:07:37.985 ' 00:07:37.985 18:52:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:37.985 18:52:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:37.985 18:52:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.986 18:52:09 thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.986 ************************************ 00:07:37.986 START TEST thread_poller_perf 00:07:37.986 ************************************ 00:07:37.986 18:52:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:37.986 [2024-11-26 18:52:09.051042] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:07:37.986 [2024-11-26 18:52:09.051382] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61035 ] 00:07:38.241 [2024-11-26 18:52:09.278363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.241 [2024-11-26 18:52:09.403519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.241 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:39.634 [2024-11-26T18:52:10.849Z] ====================================== 00:07:39.634 [2024-11-26T18:52:10.849Z] busy:2213523718 (cyc) 00:07:39.634 [2024-11-26T18:52:10.849Z] total_run_count: 267000 00:07:39.634 [2024-11-26T18:52:10.849Z] tsc_hz: 2200000000 (cyc) 00:07:39.634 [2024-11-26T18:52:10.849Z] ====================================== 00:07:39.634 [2024-11-26T18:52:10.849Z] poller_cost: 8290 (cyc), 3768 (nsec) 00:07:39.634 00:07:39.634 real 0m1.635s 00:07:39.634 user 0m1.440s 00:07:39.634 sys 0m0.081s 00:07:39.634 18:52:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.634 18:52:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:39.634 ************************************ 00:07:39.634 END TEST thread_poller_perf 00:07:39.634 ************************************ 00:07:39.634 18:52:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:39.634 18:52:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:39.634 18:52:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.634 18:52:10 thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.634 ************************************ 00:07:39.634 START TEST thread_poller_perf 00:07:39.634 ************************************ 00:07:39.634 18:52:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:39.634 [2024-11-26 18:52:10.743399] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:07:39.634 [2024-11-26 18:52:10.743633] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61071 ] 00:07:39.892 [2024-11-26 18:52:10.947455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.892 Running 1000 pollers for 1 seconds with 0 microseconds period. 
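poller_cost in the table above is evidently derived straight from the counters: busy cycles divided by total_run_count, with the nanosecond figure converting cycles through tsc_hz. Checking the 1-microsecond run's numbers in shell arithmetic:

  busy_cyc=2213523718; runs=267000; tsc_hz=2200000000
  echo $(( busy_cyc / runs ))                        # 8290 cyc, as printed
  echo $(( busy_cyc * 1000000000 / tsc_hz / runs ))  # 3768 nsec, as printed

The zero-period run that follows drives total_run_count far higher, so the per-iteration cost drops accordingly.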
00:07:39.892 [2024-11-26 18:52:11.078426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.266 [2024-11-26T18:52:12.481Z] ====================================== 00:07:41.266 [2024-11-26T18:52:12.481Z] busy:2204668097 (cyc) 00:07:41.266 [2024-11-26T18:52:12.481Z] total_run_count: 3326000 00:07:41.266 [2024-11-26T18:52:12.481Z] tsc_hz: 2200000000 (cyc) 00:07:41.266 [2024-11-26T18:52:12.481Z] ====================================== 00:07:41.266 [2024-11-26T18:52:12.481Z] poller_cost: 662 (cyc), 300 (nsec) 00:07:41.266 00:07:41.266 real 0m1.626s 00:07:41.266 user 0m1.409s 00:07:41.266 sys 0m0.103s 00:07:41.266 ************************************ 00:07:41.266 END TEST thread_poller_perf 00:07:41.266 ************************************ 00:07:41.266 18:52:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.266 18:52:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:41.266 18:52:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:41.266 ************************************ 00:07:41.266 END TEST thread 00:07:41.266 ************************************ 00:07:41.266 00:07:41.266 real 0m3.502s 00:07:41.266 user 0m2.975s 00:07:41.266 sys 0m0.302s 00:07:41.266 18:52:12 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.266 18:52:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.266 18:52:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:41.266 18:52:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.266 18:52:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.266 18:52:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.266 18:52:12 -- common/autotest_common.sh@10 -- # set +x 00:07:41.266 ************************************ 00:07:41.266 START TEST app_cmdline 00:07:41.266 ************************************ 00:07:41.266 18:52:12 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.266 * Looking for test storage... 
00:07:41.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:41.266 18:52:12 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.266 18:52:12 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.266 18:52:12 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:41.524 18:52:12 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.524 18:52:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:41.524 18:52:12 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.524 18:52:12 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.524 --rc genhtml_branch_coverage=1 00:07:41.524 --rc genhtml_function_coverage=1 00:07:41.524 --rc genhtml_legend=1 00:07:41.524 --rc geninfo_all_blocks=1 00:07:41.524 --rc geninfo_unexecuted_blocks=1 00:07:41.524 00:07:41.524 ' 00:07:41.524 18:52:12 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.524 --rc genhtml_branch_coverage=1 00:07:41.524 --rc genhtml_function_coverage=1 00:07:41.524 --rc genhtml_legend=1 00:07:41.524 --rc geninfo_all_blocks=1 00:07:41.524 --rc geninfo_unexecuted_blocks=1 00:07:41.524 
00:07:41.524 ' 00:07:41.524 18:52:12 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:41.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.524 --rc genhtml_branch_coverage=1 00:07:41.524 --rc genhtml_function_coverage=1 00:07:41.524 --rc genhtml_legend=1 00:07:41.524 --rc geninfo_all_blocks=1 00:07:41.524 --rc geninfo_unexecuted_blocks=1 00:07:41.525 00:07:41.525 ' 00:07:41.525 18:52:12 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.525 --rc genhtml_branch_coverage=1 00:07:41.525 --rc genhtml_function_coverage=1 00:07:41.525 --rc genhtml_legend=1 00:07:41.525 --rc geninfo_all_blocks=1 00:07:41.525 --rc geninfo_unexecuted_blocks=1 00:07:41.525 00:07:41.525 ' 00:07:41.525 18:52:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:41.525 18:52:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61155 00:07:41.525 18:52:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:41.525 18:52:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61155 00:07:41.525 18:52:12 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61155 ']' 00:07:41.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.525 18:52:12 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.525 18:52:12 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.525 18:52:12 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.525 18:52:12 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.525 18:52:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:41.525 [2024-11-26 18:52:12.699757] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
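This spdk_tgt instance runs with --rpcs-allowed spdk_get_version,rpc_get_methods, so its RPC surface is limited to exactly those two methods; every other call must be rejected, which is what the rest of the test exercises below. A sketch of the three probes, assuming the repo's rpc.py:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed: returns the version object
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # allowed: lists just the two methods
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # blocked: -32601 "Method not found"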
00:07:41.525 [2024-11-26 18:52:12.700208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61155 ] 00:07:41.783 [2024-11-26 18:52:12.897444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.040 [2024-11-26 18:52:13.027104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.972 18:52:13 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.972 18:52:13 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:42.972 18:52:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:43.230 { 00:07:43.230 "version": "SPDK v25.01-pre git sha1 baa2dd0a5", 00:07:43.230 "fields": { 00:07:43.230 "major": 25, 00:07:43.230 "minor": 1, 00:07:43.230 "patch": 0, 00:07:43.230 "suffix": "-pre", 00:07:43.230 "commit": "baa2dd0a5" 00:07:43.230 } 00:07:43.230 } 00:07:43.230 18:52:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:43.230 18:52:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:43.230 18:52:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:43.230 18:52:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:43.230 18:52:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.230 18:52:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:43.230 18:52:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.230 18:52:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:43.230 18:52:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:43.230 18:52:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:43.230 18:52:14 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.546 request: 00:07:43.546 { 00:07:43.546 "method": "env_dpdk_get_mem_stats", 00:07:43.546 "req_id": 1 00:07:43.546 } 00:07:43.546 Got JSON-RPC error response 00:07:43.546 response: 00:07:43.546 { 00:07:43.546 "code": -32601, 00:07:43.546 "message": "Method not found" 00:07:43.546 } 00:07:43.546 18:52:14 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:43.546 18:52:14 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:43.546 18:52:14 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:43.546 18:52:14 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:43.546 18:52:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61155 00:07:43.546 18:52:14 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61155 ']' 00:07:43.547 18:52:14 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61155 00:07:43.547 18:52:14 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:43.547 18:52:14 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.547 18:52:14 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61155 00:07:43.547 killing process with pid 61155 00:07:43.547 18:52:14 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.547 18:52:14 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.547 18:52:14 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61155' 00:07:43.547 18:52:14 app_cmdline -- common/autotest_common.sh@973 -- # kill 61155 00:07:43.547 18:52:14 app_cmdline -- common/autotest_common.sh@978 -- # wait 61155 00:07:46.074 ************************************ 00:07:46.074 END TEST app_cmdline 00:07:46.074 ************************************ 00:07:46.074 00:07:46.074 real 0m4.416s 00:07:46.074 user 0m5.188s 00:07:46.074 sys 0m0.539s 00:07:46.074 18:52:16 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.074 18:52:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.074 18:52:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:46.074 18:52:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.074 18:52:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.074 18:52:16 -- common/autotest_common.sh@10 -- # set +x 00:07:46.074 ************************************ 00:07:46.074 START TEST version 00:07:46.074 ************************************ 00:07:46.074 18:52:16 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:46.074 * Looking for test storage... 
00:07:46.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:46.074 18:52:16 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.074 18:52:16 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.074 18:52:16 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.074 18:52:17 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.074 18:52:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.074 18:52:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.074 18:52:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.074 18:52:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.074 18:52:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.074 18:52:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.074 18:52:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.074 18:52:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.074 18:52:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.074 18:52:17 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.074 18:52:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.074 18:52:17 version -- scripts/common.sh@344 -- # case "$op" in 00:07:46.074 18:52:17 version -- scripts/common.sh@345 -- # : 1 00:07:46.074 18:52:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.074 18:52:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:46.074 18:52:17 version -- scripts/common.sh@365 -- # decimal 1 00:07:46.074 18:52:17 version -- scripts/common.sh@353 -- # local d=1 00:07:46.074 18:52:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.074 18:52:17 version -- scripts/common.sh@355 -- # echo 1 00:07:46.074 18:52:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.074 18:52:17 version -- scripts/common.sh@366 -- # decimal 2 00:07:46.074 18:52:17 version -- scripts/common.sh@353 -- # local d=2 00:07:46.074 18:52:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.074 18:52:17 version -- scripts/common.sh@355 -- # echo 2 00:07:46.074 18:52:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.074 18:52:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.074 18:52:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.074 18:52:17 version -- scripts/common.sh@368 -- # return 0 00:07:46.074 18:52:17 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.074 18:52:17 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.074 --rc genhtml_branch_coverage=1 00:07:46.074 --rc genhtml_function_coverage=1 00:07:46.074 --rc genhtml_legend=1 00:07:46.074 --rc geninfo_all_blocks=1 00:07:46.074 --rc geninfo_unexecuted_blocks=1 00:07:46.074 00:07:46.074 ' 00:07:46.074 18:52:17 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.074 --rc genhtml_branch_coverage=1 00:07:46.074 --rc genhtml_function_coverage=1 00:07:46.074 --rc genhtml_legend=1 00:07:46.074 --rc geninfo_all_blocks=1 00:07:46.074 --rc geninfo_unexecuted_blocks=1 00:07:46.074 00:07:46.074 ' 00:07:46.074 18:52:17 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.074 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:46.074 --rc genhtml_branch_coverage=1 00:07:46.074 --rc genhtml_function_coverage=1 00:07:46.074 --rc genhtml_legend=1 00:07:46.074 --rc geninfo_all_blocks=1 00:07:46.074 --rc geninfo_unexecuted_blocks=1 00:07:46.074 00:07:46.074 ' 00:07:46.074 18:52:17 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.074 --rc genhtml_branch_coverage=1 00:07:46.074 --rc genhtml_function_coverage=1 00:07:46.074 --rc genhtml_legend=1 00:07:46.074 --rc geninfo_all_blocks=1 00:07:46.074 --rc geninfo_unexecuted_blocks=1 00:07:46.074 00:07:46.074 ' 00:07:46.074 18:52:17 version -- app/version.sh@17 -- # get_header_version major 00:07:46.074 18:52:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.074 18:52:17 version -- app/version.sh@14 -- # cut -f2 00:07:46.074 18:52:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.074 18:52:17 version -- app/version.sh@17 -- # major=25 00:07:46.074 18:52:17 version -- app/version.sh@18 -- # get_header_version minor 00:07:46.074 18:52:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.074 18:52:17 version -- app/version.sh@14 -- # cut -f2 00:07:46.074 18:52:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.074 18:52:17 version -- app/version.sh@18 -- # minor=1 00:07:46.074 18:52:17 version -- app/version.sh@19 -- # get_header_version patch 00:07:46.074 18:52:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.074 18:52:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.074 18:52:17 version -- app/version.sh@14 -- # cut -f2 00:07:46.074 18:52:17 version -- app/version.sh@19 -- # patch=0 00:07:46.074 18:52:17 version -- app/version.sh@20 -- # get_header_version suffix 00:07:46.074 18:52:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:46.074 18:52:17 version -- app/version.sh@14 -- # cut -f2 00:07:46.074 18:52:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:46.074 18:52:17 version -- app/version.sh@20 -- # suffix=-pre 00:07:46.074 18:52:17 version -- app/version.sh@22 -- # version=25.1 00:07:46.074 18:52:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:46.074 18:52:17 version -- app/version.sh@28 -- # version=25.1rc0 00:07:46.075 18:52:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:46.075 18:52:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:46.075 18:52:17 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:46.075 18:52:17 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:46.075 ************************************ 00:07:46.075 END TEST version 00:07:46.075 ************************************ 00:07:46.075 00:07:46.075 real 0m0.243s 00:07:46.075 user 0m0.171s 00:07:46.075 sys 0m0.105s 00:07:46.075 18:52:17 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.075 18:52:17 version -- common/autotest_common.sh@10 -- # set +x 00:07:46.075 18:52:17 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:46.075 18:52:17 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:46.075 18:52:17 -- spdk/autotest.sh@194 -- # uname -s 00:07:46.075 18:52:17 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:46.075 18:52:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:46.075 18:52:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:46.075 18:52:17 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:46.075 18:52:17 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:46.075 18:52:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.075 18:52:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.075 18:52:17 -- common/autotest_common.sh@10 -- # set +x 00:07:46.075 ************************************ 00:07:46.075 START TEST blockdev_nvme 00:07:46.075 ************************************ 00:07:46.075 18:52:17 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:46.075 * Looking for test storage... 00:07:46.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:46.075 18:52:17 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.075 18:52:17 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.075 18:52:17 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.075 18:52:17 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.075 18:52:17 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:46.332 18:52:17 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.332 18:52:17 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.332 18:52:17 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.332 18:52:17 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:46.332 18:52:17 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.332 18:52:17 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.332 --rc genhtml_branch_coverage=1 00:07:46.332 --rc genhtml_function_coverage=1 00:07:46.332 --rc genhtml_legend=1 00:07:46.332 --rc geninfo_all_blocks=1 00:07:46.332 --rc geninfo_unexecuted_blocks=1 00:07:46.332 00:07:46.332 ' 00:07:46.332 18:52:17 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.332 --rc genhtml_branch_coverage=1 00:07:46.332 --rc genhtml_function_coverage=1 00:07:46.332 --rc genhtml_legend=1 00:07:46.332 --rc geninfo_all_blocks=1 00:07:46.332 --rc geninfo_unexecuted_blocks=1 00:07:46.332 00:07:46.332 ' 00:07:46.332 18:52:17 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.332 --rc genhtml_branch_coverage=1 00:07:46.332 --rc genhtml_function_coverage=1 00:07:46.332 --rc genhtml_legend=1 00:07:46.332 --rc geninfo_all_blocks=1 00:07:46.332 --rc geninfo_unexecuted_blocks=1 00:07:46.332 00:07:46.332 ' 00:07:46.332 18:52:17 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.332 --rc genhtml_branch_coverage=1 00:07:46.332 --rc genhtml_function_coverage=1 00:07:46.332 --rc genhtml_legend=1 00:07:46.332 --rc geninfo_all_blocks=1 00:07:46.332 --rc geninfo_unexecuted_blocks=1 00:07:46.332 00:07:46.332 ' 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:46.332 18:52:17 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61349 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61349 00:07:46.332 18:52:17 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:46.332 18:52:17 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61349 ']' 00:07:46.332 18:52:17 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.332 18:52:17 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.332 18:52:17 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.332 18:52:17 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.332 18:52:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.332 [2024-11-26 18:52:17.429775] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
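Once this target is up, setup_nvme_conf (below) feeds gen_nvme.sh's generated JSON to load_subsystem_config, attaching all four QEMU NVMe controllers in one shot. The per-controller equivalent, as a sketch using the same names and PCIe addresses:

  # Nvme0..Nvme3 at 0000:00:10.0 .. 0000:00:13.0, then wait for bdev examination
  for i in 0 1 2 3; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme$i -t PCIe -a 0000:00:1$i.0
  done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine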
00:07:46.332 [2024-11-26 18:52:17.430155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61349 ] 00:07:46.589 [2024-11-26 18:52:17.617135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.589 [2024-11-26 18:52:17.720624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.522 18:52:18 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.522 18:52:18 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:47.522 18:52:18 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:47.522 18:52:18 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:47.522 18:52:18 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:47.522 18:52:18 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:47.522 18:52:18 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:47.522 18:52:18 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:47.522 18:52:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.522 18:52:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.820 18:52:18 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.820 18:52:18 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:47.820 18:52:18 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.820 18:52:18 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.820 18:52:18 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.820 18:52:18 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:47.820 18:52:18 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.820 18:52:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.820 18:52:18 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:47.820 18:52:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.085 18:52:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:48.085 18:52:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:48.085 18:52:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "97f11910-f16d-4582-a67e-1122b7efeadc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "97f11910-f16d-4582-a67e-1122b7efeadc",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b9fd5ef3-bd62-4db4-96a8-06920627ffb1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b9fd5ef3-bd62-4db4-96a8-06920627ffb1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "cec30c62-1b02-442b-8897-9cfac222ef20"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cec30c62-1b02-442b-8897-9cfac222ef20",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c041cce7-efa0-4993-859e-4097c274426a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c041cce7-efa0-4993-859e-4097c274426a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4274f786-a64f-4b6c-a230-4fab9ebf1f40"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "4274f786-a64f-4b6c-a230-4fab9ebf1f40",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "cefec117-ee5c-46b0-a375-0327685c3d8e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cefec117-ee5c-46b0-a375-0327685c3d8e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:48.085 18:52:19 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:48.085 18:52:19 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:48.085 18:52:19 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:48.085 18:52:19 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61349 00:07:48.085 18:52:19 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61349 ']' 00:07:48.085 18:52:19 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61349 00:07:48.085 18:52:19 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:48.085 18:52:19 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.085 18:52:19 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61349 00:07:48.085 killing process with pid 61349 00:07:48.085 18:52:19 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.085 18:52:19 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.085 18:52:19 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61349' 00:07:48.085 18:52:19 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61349 00:07:48.085 18:52:19 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61349 00:07:50.610 18:52:21 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:50.610 18:52:21 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:50.610 18:52:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:50.610 18:52:21 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.610 18:52:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.610 ************************************ 00:07:50.610 START TEST bdev_hello_world 00:07:50.610 ************************************ 00:07:50.610 18:52:21 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:50.610 [2024-11-26 18:52:21.374152] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:07:50.610 [2024-11-26 18:52:21.374354] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61444 ] 00:07:50.610 [2024-11-26 18:52:21.551415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.610 [2024-11-26 18:52:21.658825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.176 [2024-11-26 18:52:22.292506] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:51.176 [2024-11-26 18:52:22.292567] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:51.176 [2024-11-26 18:52:22.292596] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:51.176 [2024-11-26 18:52:22.295645] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:51.176 [2024-11-26 18:52:22.296025] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:51.176 [2024-11-26 18:52:22.296059] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:51.176 [2024-11-26 18:52:22.296274] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:51.176 00:07:51.176 [2024-11-26 18:52:22.296308] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:52.110 ************************************ 00:07:52.110 END TEST bdev_hello_world 00:07:52.110 ************************************ 00:07:52.110 00:07:52.110 real 0m2.006s 00:07:52.110 user 0m1.675s 00:07:52.110 sys 0m0.220s 00:07:52.110 18:52:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.110 18:52:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:52.110 18:52:23 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:52.110 18:52:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.110 18:52:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.110 18:52:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.369 ************************************ 00:07:52.369 START TEST bdev_bounds 00:07:52.369 ************************************ 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:52.369 Process bdevio pid: 61486 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61486 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61486' 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61486 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61486 ']' 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.369 18:52:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:52.369 [2024-11-26 18:52:23.426545] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:07:52.369 [2024-11-26 18:52:23.426730] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61486 ] 00:07:52.636 [2024-11-26 18:52:23.619531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.636 [2024-11-26 18:52:23.748441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.636 [2024-11-26 18:52:23.748551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.636 [2024-11-26 18:52:23.748562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.256 18:52:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.256 18:52:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:53.256 18:52:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:53.514 I/O targets: 00:07:53.514 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:53.514 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:53.514 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:53.514 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:53.514 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:53.514 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:53.514 00:07:53.514 00:07:53.514 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.514 http://cunit.sourceforge.net/ 00:07:53.514 00:07:53.514 00:07:53.514 Suite: bdevio tests on: Nvme3n1 00:07:53.514 Test: blockdev write read block ...passed 00:07:53.514 Test: blockdev write zeroes read block ...passed 00:07:53.514 Test: blockdev write zeroes read no split ...passed 00:07:53.514 Test: blockdev write zeroes read split ...passed 00:07:53.514 Test: blockdev write zeroes read split partial ...passed 00:07:53.514 Test: blockdev reset ...[2024-11-26 18:52:24.662109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:53.514 passed 00:07:53.514 Test: blockdev write read 8 blocks ...[2024-11-26 18:52:24.666146] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:53.514 passed 00:07:53.514 Test: blockdev write read size > 128k ...passed 00:07:53.514 Test: blockdev write read invalid size ...passed 00:07:53.515 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:53.515 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:53.515 Test: blockdev write read max offset ...passed 00:07:53.515 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:53.515 Test: blockdev writev readv 8 blocks ...passed 00:07:53.515 Test: blockdev writev readv 30 x 1block ...passed 00:07:53.515 Test: blockdev writev readv block ...passed 00:07:53.515 Test: blockdev writev readv size > 128k ...passed 00:07:53.515 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:53.515 Test: blockdev comparev and writev ...[2024-11-26 18:52:24.673402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:53.515 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2be40a000 len:0x1000 00:07:53.515 [2024-11-26 18:52:24.673604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:53.515 passed 00:07:53.515 Test: blockdev nvme passthru vendor specific ...passed 00:07:53.515 Test: blockdev nvme admin passthru ...[2024-11-26 18:52:24.674333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:53.515 [2024-11-26 18:52:24.674390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:53.515 passed 00:07:53.515 Test: blockdev copy ...passed 00:07:53.515 Suite: bdevio tests on: Nvme2n3 00:07:53.515 Test: blockdev write read block ...passed 00:07:53.515 Test: blockdev write zeroes read block ...passed 00:07:53.515 Test: blockdev write zeroes read no split ...passed 00:07:53.515 Test: blockdev write zeroes read split ...passed 00:07:53.773 Test: blockdev write zeroes read split partial ...passed 00:07:53.773 Test: blockdev reset ...[2024-11-26 18:52:24.741415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:53.773 [2024-11-26 18:52:24.746053] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:53.773 passed 00:07:53.773 Test: blockdev write read 8 blocks ...passed 00:07:53.773 Test: blockdev write read size > 128k ...passed 00:07:53.773 Test: blockdev write read invalid size ...passed 00:07:53.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:53.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:53.773 Test: blockdev write read max offset ...passed 00:07:53.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:53.773 Test: blockdev writev readv 8 blocks ...passed 00:07:53.773 Test: blockdev writev readv 30 x 1block ...passed 00:07:53.773 Test: blockdev writev readv block ...passed 00:07:53.773 Test: blockdev writev readv size > 128k ...passed 00:07:53.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:53.773 Test: blockdev comparev and writev ...[2024-11-26 18:52:24.754404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a1606000 len:0x1000 00:07:53.773 [2024-11-26 18:52:24.754481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:53.773 passed 00:07:53.773 Test: blockdev nvme passthru rw ...passed 00:07:53.773 Test: blockdev nvme passthru vendor specific ...passed 00:07:53.773 Test: blockdev nvme admin passthru ...[2024-11-26 18:52:24.755301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:53.773 [2024-11-26 18:52:24.755350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:53.773 passed 00:07:53.773 Test: blockdev copy ...passed 00:07:53.773 Suite: bdevio tests on: Nvme2n2 00:07:53.773 Test: blockdev write read block ...passed 00:07:53.773 Test: blockdev write zeroes read block ...passed 00:07:53.773 Test: blockdev write zeroes read no split ...passed 00:07:53.773 Test: blockdev write zeroes read split ...passed 00:07:53.773 Test: blockdev write zeroes read split partial ...passed 00:07:53.773 Test: blockdev reset ...[2024-11-26 18:52:24.818529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:53.773 [2024-11-26 18:52:24.822841] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:53.773 passed 00:07:53.773 Test: blockdev write read 8 blocks ...passed 00:07:53.773 Test: blockdev write read size > 128k ...passed 00:07:53.773 Test: blockdev write read invalid size ...passed 00:07:53.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:53.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:53.773 Test: blockdev write read max offset ...passed 00:07:53.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:53.773 Test: blockdev writev readv 8 blocks ...passed 00:07:53.773 Test: blockdev writev readv 30 x 1block ...passed 00:07:53.773 Test: blockdev writev readv block ...passed 00:07:53.773 Test: blockdev writev readv size > 128k ...passed 00:07:53.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:53.773 Test: blockdev comparev and writev ...[2024-11-26 18:52:24.832651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:07:53.773 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2ce43c000 len:0x1000 00:07:53.773 [2024-11-26 18:52:24.832858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:53.773 passed 00:07:53.773 Test: blockdev nvme passthru vendor specific ...passed 00:07:53.773 Test: blockdev nvme admin passthru ...[2024-11-26 18:52:24.833650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:53.773 [2024-11-26 18:52:24.833700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:53.773 passed 00:07:53.773 Test: blockdev copy ...passed 00:07:53.773 Suite: bdevio tests on: Nvme2n1 00:07:53.773 Test: blockdev write read block ...passed 00:07:53.773 Test: blockdev write zeroes read block ...passed 00:07:53.773 Test: blockdev write zeroes read no split ...passed 00:07:53.773 Test: blockdev write zeroes read split ...passed 00:07:53.773 Test: blockdev write zeroes read split partial ...passed 00:07:53.773 Test: blockdev reset ...[2024-11-26 18:52:24.910218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:53.773 [2024-11-26 18:52:24.914803] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:53.773 passed 00:07:53.773 Test: blockdev write read 8 blocks ...passed 00:07:53.773 Test: blockdev write read size > 128k ...passed 00:07:53.773 Test: blockdev write read invalid size ...passed 00:07:53.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:53.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:53.773 Test: blockdev write read max offset ...passed 00:07:53.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:53.773 Test: blockdev writev readv 8 blocks ...passed 00:07:53.773 Test: blockdev writev readv 30 x 1block ...passed 00:07:53.773 Test: blockdev writev readv block ...passed 00:07:53.773 Test: blockdev writev readv size > 128k ...passed 00:07:53.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:53.774 Test: blockdev comparev and writev ...[2024-11-26 18:52:24.924830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce438000 len:0x1000 00:07:53.774 [2024-11-26 18:52:24.924898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:53.774 passed 00:07:53.774 Test: blockdev nvme passthru rw ...passed 00:07:53.774 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:52:24.925794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:53.774 [2024-11-26 18:52:24.925837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:53.774 passed 00:07:53.774 Test: blockdev nvme admin passthru ...passed 00:07:53.774 Test: blockdev copy ...passed 00:07:53.774 Suite: bdevio tests on: Nvme1n1 00:07:53.774 Test: blockdev write read block ...passed 00:07:53.774 Test: blockdev write zeroes read block ...passed 00:07:53.774 Test: blockdev write zeroes read no split ...passed 00:07:53.774 Test: blockdev write zeroes read split ...passed 00:07:54.032 Test: blockdev write zeroes read split partial ...passed 00:07:54.032 Test: blockdev reset ...[2024-11-26 18:52:24.987266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:54.032 passed 00:07:54.032 Test: blockdev write read 8 blocks ...[2024-11-26 18:52:24.990969] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:54.032 passed 00:07:54.032 Test: blockdev write read size > 128k ...passed 00:07:54.032 Test: blockdev write read invalid size ...passed 00:07:54.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:54.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:54.032 Test: blockdev write read max offset ...passed 00:07:54.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:54.032 Test: blockdev writev readv 8 blocks ...passed 00:07:54.032 Test: blockdev writev readv 30 x 1block ...passed 00:07:54.032 Test: blockdev writev readv block ...passed 00:07:54.032 Test: blockdev writev readv size > 128k ...passed 00:07:54.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:54.032 Test: blockdev comparev and writev ...[2024-11-26 18:52:25.001821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:54.032 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2ce434000 len:0x1000 00:07:54.032 [2024-11-26 18:52:25.002011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:54.032 passed 00:07:54.032 Test: blockdev nvme passthru vendor specific ...passed 00:07:54.032 Test: blockdev nvme admin passthru ...[2024-11-26 18:52:25.002823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:54.032 [2024-11-26 18:52:25.002876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:54.032 passed 00:07:54.032 Test: blockdev copy ...passed 00:07:54.032 Suite: bdevio tests on: Nvme0n1 00:07:54.032 Test: blockdev write read block ...passed 00:07:54.032 Test: blockdev write zeroes read block ...passed 00:07:54.032 Test: blockdev write zeroes read no split ...passed 00:07:54.032 Test: blockdev write zeroes read split ...passed 00:07:54.032 Test: blockdev write zeroes read split partial ...passed 00:07:54.032 Test: blockdev reset ...[2024-11-26 18:52:25.079557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:54.032 passed 00:07:54.032 Test: blockdev write read 8 blocks ...[2024-11-26 18:52:25.083355] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:54.032 passed 00:07:54.032 Test: blockdev write read size > 128k ...passed 00:07:54.032 Test: blockdev write read invalid size ...passed 00:07:54.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:54.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:54.032 Test: blockdev write read max offset ...passed 00:07:54.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:54.032 Test: blockdev writev readv 8 blocks ...passed 00:07:54.032 Test: blockdev writev readv 30 x 1block ...passed 00:07:54.032 Test: blockdev writev readv block ...passed 00:07:54.032 Test: blockdev writev readv size > 128k ...passed 00:07:54.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:54.032 Test: blockdev comparev and writev ...passed 00:07:54.032 Test: blockdev nvme passthru rw ...[2024-11-26 18:52:25.092146] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:54.032 separate metadata which is not supported yet. 
00:07:54.032 passed 00:07:54.032 Test: blockdev nvme passthru vendor specific ...passed 00:07:54.032 Test: blockdev nvme admin passthru ...[2024-11-26 18:52:25.092697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:54.032 [2024-11-26 18:52:25.092760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:54.032 passed 00:07:54.032 Test: blockdev copy ...passed 00:07:54.032 00:07:54.032 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.032 suites 6 6 n/a 0 0 00:07:54.032 tests 138 138 138 0 0 00:07:54.032 asserts 893 893 893 0 n/a 00:07:54.032 00:07:54.032 Elapsed time = 1.373 seconds 00:07:54.032 0 00:07:54.032 18:52:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61486 00:07:54.032 18:52:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61486 ']' 00:07:54.032 18:52:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61486 00:07:54.033 18:52:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:54.033 18:52:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.033 18:52:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61486 00:07:54.033 killing process with pid 61486 00:07:54.033 18:52:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.033 18:52:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.033 18:52:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61486' 00:07:54.033 18:52:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61486 00:07:54.033 18:52:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61486 00:07:54.966 18:52:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:54.966 00:07:54.966 real 0m2.751s 00:07:54.966 user 0m7.119s 00:07:54.966 sys 0m0.367s 00:07:54.966 18:52:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.966 ************************************ 00:07:54.966 END TEST bdev_bounds 00:07:54.966 ************************************ 00:07:54.966 18:52:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:54.966 18:52:26 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:54.966 18:52:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:54.966 18:52:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.966 18:52:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.966 ************************************ 00:07:54.966 START TEST bdev_nbd 00:07:54.966 ************************************ 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61540 00:07:54.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61540 /var/tmp/spdk-nbd.sock 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61540 ']' 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.966 18:52:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:55.225 [2024-11-26 18:52:26.214153] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:07:55.225 [2024-11-26 18:52:26.214374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.225 [2024-11-26 18:52:26.409081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.483 [2024-11-26 18:52:26.545013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:56.051 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.617 1+0 records in 
00:07:56.617 1+0 records out 00:07:56.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565301 s, 7.2 MB/s 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.617 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:56.618 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.876 1+0 records in 00:07:56.876 1+0 records out 00:07:56.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565453 s, 7.2 MB/s 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:56.876 18:52:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.195 1+0 records in 00:07:57.195 1+0 records out 00:07:57.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478316 s, 8.6 MB/s 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:57.195 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.454 1+0 records in 00:07:57.454 1+0 records out 00:07:57.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555984 s, 7.4 MB/s 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.454 18:52:28 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:57.454 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.021 1+0 records in 00:07:58.021 1+0 records out 00:07:58.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790764 s, 5.2 MB/s 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:58.021 18:52:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.279 1+0 records in 00:07:58.279 1+0 records out 00:07:58.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649447 s, 6.3 MB/s 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:58.279 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.537 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd0", 00:07:58.537 "bdev_name": "Nvme0n1" 00:07:58.537 }, 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd1", 00:07:58.537 "bdev_name": "Nvme1n1" 00:07:58.537 }, 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd2", 00:07:58.537 "bdev_name": "Nvme2n1" 00:07:58.537 }, 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd3", 00:07:58.537 "bdev_name": "Nvme2n2" 00:07:58.537 }, 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd4", 00:07:58.537 "bdev_name": "Nvme2n3" 00:07:58.537 }, 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd5", 00:07:58.537 "bdev_name": "Nvme3n1" 00:07:58.537 } 00:07:58.537 ]' 00:07:58.537 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:58.537 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd0", 00:07:58.537 "bdev_name": "Nvme0n1" 00:07:58.537 }, 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd1", 00:07:58.537 "bdev_name": "Nvme1n1" 00:07:58.537 }, 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd2", 00:07:58.537 "bdev_name": "Nvme2n1" 00:07:58.537 }, 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd3", 00:07:58.537 "bdev_name": "Nvme2n2" 00:07:58.537 }, 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd4", 00:07:58.537 "bdev_name": "Nvme2n3" 00:07:58.537 }, 00:07:58.537 { 00:07:58.537 "nbd_device": "/dev/nbd5", 00:07:58.537 "bdev_name": "Nvme3n1" 00:07:58.537 } 00:07:58.537 ]' 00:07:58.537 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:58.537 18:52:29 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:58.537 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.537 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:58.537 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:58.537 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:58.537 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.537 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:58.795 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:58.795 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:58.795 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:58.795 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.795 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.795 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:58.795 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.795 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.795 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.795 18:52:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:59.053 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:59.310 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:59.310 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:59.311 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.311 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.311 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:59.311 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.311 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.311 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.311 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:59.568 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:59.568 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:59.568 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:59.568 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.568 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.568 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:59.568 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.568 18:52:30 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:59.568 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.568 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:59.826 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:59.826 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:59.826 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:59.826 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.826 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.826 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:59.826 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.826 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.826 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.826 18:52:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:00.085 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:00.085 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:00.085 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:00.085 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.085 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.085 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:00.085 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.085 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.085 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.085 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:00.343 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:00.343 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:00.343 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:00.343 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.343 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.343 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:00.343 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.343 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.343 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:00.343 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.343 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:00.910 18:52:31 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:00.910 18:52:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:01.167 /dev/nbd0 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.167 
18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.167 1+0 records in 00:08:01.167 1+0 records out 00:08:01.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000880083 s, 4.7 MB/s 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:01.167 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:01.484 /dev/nbd1 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.484 1+0 records in 00:08:01.484 1+0 records out 00:08:01.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462752 s, 8.9 MB/s 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:01.484 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:01.743 /dev/nbd10 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.743 1+0 records in 00:08:01.743 1+0 records out 00:08:01.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498817 s, 8.2 MB/s 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:01.743 18:52:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:02.309 /dev/nbd11 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.309 18:52:33 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.309 1+0 records in 00:08:02.309 1+0 records out 00:08:02.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440962 s, 9.3 MB/s 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.309 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:02.310 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:02.568 /dev/nbd12 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.568 1+0 records in 00:08:02.568 1+0 records out 00:08:02.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500456 s, 8.2 MB/s 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:02.568 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:02.826 /dev/nbd13 
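The nbd_start_disk attachments in this run all use the same readiness probe from common/autotest_common.sh: poll /proc/partitions until the device appears, then read one 4 KiB block with O_DIRECT and confirm a non-zero transfer. A minimal sketch of that pattern, condensed from the trace (the helper name attach_and_probe is hypothetical; the paths and the 20-try limit are taken from the log):

```bash
# Condensed sketch of the attach-and-probe flow traced above.
# attach_and_probe is a hypothetical name; the real logic lives in
# bdev/nbd_common.sh (nbd_start_disks) and common/autotest_common.sh (waitfornbd).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
probe_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

attach_and_probe() {
    local bdev=$1 dev=$2 name size i
    "$rpc" -s "$sock" nbd_start_disk "$bdev" "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do           # wait for the kernel to list it
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1
    done
    dd if="$dev" of="$probe_file" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$probe_file")          # the trace checks '[' 4096 '!=' 0 ']'
    rm -f "$probe_file"
    [ "$size" != 0 ]
}

attach_and_probe Nvme0n1 /dev/nbd0
```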
00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.826 1+0 records in 00:08:02.826 1+0 records out 00:08:02.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574236 s, 7.1 MB/s 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.826 18:52:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.085 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd0", 00:08:03.085 "bdev_name": "Nvme0n1" 00:08:03.085 }, 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd1", 00:08:03.085 "bdev_name": "Nvme1n1" 00:08:03.085 }, 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd10", 00:08:03.085 "bdev_name": "Nvme2n1" 00:08:03.085 }, 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd11", 00:08:03.085 "bdev_name": "Nvme2n2" 00:08:03.085 }, 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd12", 00:08:03.085 "bdev_name": "Nvme2n3" 00:08:03.085 }, 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd13", 00:08:03.085 "bdev_name": "Nvme3n1" 00:08:03.085 } 00:08:03.085 ]' 00:08:03.085 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.085 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd0", 00:08:03.085 "bdev_name": "Nvme0n1" 00:08:03.085 }, 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd1", 00:08:03.085 "bdev_name": "Nvme1n1" 00:08:03.085 
}, 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd10", 00:08:03.085 "bdev_name": "Nvme2n1" 00:08:03.085 }, 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd11", 00:08:03.085 "bdev_name": "Nvme2n2" 00:08:03.085 }, 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd12", 00:08:03.085 "bdev_name": "Nvme2n3" 00:08:03.085 }, 00:08:03.085 { 00:08:03.085 "nbd_device": "/dev/nbd13", 00:08:03.085 "bdev_name": "Nvme3n1" 00:08:03.085 } 00:08:03.085 ]' 00:08:03.085 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:03.085 /dev/nbd1 00:08:03.085 /dev/nbd10 00:08:03.085 /dev/nbd11 00:08:03.085 /dev/nbd12 00:08:03.085 /dev/nbd13' 00:08:03.085 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:03.085 /dev/nbd1 00:08:03.085 /dev/nbd10 00:08:03.085 /dev/nbd11 00:08:03.085 /dev/nbd12 00:08:03.085 /dev/nbd13' 00:08:03.085 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:03.343 256+0 records in 00:08:03.343 256+0 records out 00:08:03.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00931168 s, 113 MB/s 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:03.343 256+0 records in 00:08:03.343 256+0 records out 00:08:03.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124716 s, 8.4 MB/s 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.343 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:03.601 256+0 records in 00:08:03.601 256+0 records out 00:08:03.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144121 s, 7.3 MB/s 00:08:03.601 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.601 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:03.601 256+0 records in 00:08:03.601 256+0 records out 00:08:03.601 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.156643 s, 6.7 MB/s 00:08:03.601 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.601 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:03.858 256+0 records in 00:08:03.858 256+0 records out 00:08:03.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13373 s, 7.8 MB/s 00:08:03.859 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.859 18:52:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:03.859 256+0 records in 00:08:03.859 256+0 records out 00:08:03.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133656 s, 7.8 MB/s 00:08:03.859 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.859 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:04.116 256+0 records in 00:08:04.116 256+0 records out 00:08:04.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126035 s, 8.3 MB/s 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.116 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:04.373 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:04.373 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:04.373 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:04.373 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.373 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.373 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:04.373 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.373 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.373 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.373 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:04.631 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:04.631 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:04.631 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:04.631 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.631 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.631 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:04.631 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.631 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.631 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.631 18:52:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:05.196 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:05.196 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:05.196 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:05.196 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.196 18:52:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.196 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:05.196 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.196 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.196 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.196 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:05.469 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:05.469 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:05.469 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:05.469 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.469 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.469 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:05.469 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.469 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.469 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.469 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:05.755 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:05.755 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:05.755 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:05.755 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.755 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.755 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:05.755 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.755 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.755 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.755 18:52:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:06.014 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:06.014 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:06.014 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:06.014 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.014 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.014 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:06.014 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.014 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.014 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.014 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.014 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:06.272 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:06.837 malloc_lvol_verify 00:08:06.837 18:52:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:07.095 81c33198-64e3-4e81-8e72-e85f8e885b40 00:08:07.095 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:07.353 2ce845c8-f7ce-41c3-88f8-f1c8c2e9b5c8 00:08:07.353 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:07.610 /dev/nbd0 00:08:07.610 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:07.610 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:07.610 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:07.610 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:07.610 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:07.610 mke2fs 1.47.0 (5-Feb-2023) 00:08:07.610 Discarding device blocks: 0/4096 done 00:08:07.610 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:07.610 00:08:07.610 Allocating group tables: 0/1 done 00:08:07.610 Writing inode tables: 0/1 done 00:08:07.610 Creating journal (1024 blocks): done 00:08:07.610 Writing superblocks and filesystem accounting information: 0/1 done 00:08:07.610 00:08:07.610 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:07.610 18:52:38 
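The nbd_with_lvol_verify step traced above goes one layer further: it stacks a logical volume on a malloc bdev, exports it over NBD, and proves the device works end to end by building an ext4 filesystem on it. A rough sketch of that sequence, with sizes as in the trace (16 MiB malloc bdev with 512-byte blocks and a 4 MiB lvol, hence the 8192-sector capacity check):

```bash
# Rough sketch of the nbd_with_lvol_verify sequence traced above.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside the store
rpc nbd_start_disk lvs/lvol /dev/nbd0

# capacity is exposed in 512 B sectors; 4 MiB -> 8192, and 0 would mean not ready
[[ -e /sys/block/nbd0/size ]] && (( $(cat /sys/block/nbd0/size) != 0 ))

mkfs.ext4 /dev/nbd0                                   # fails loudly if I/O is broken
rpc nbd_stop_disk /dev/nbd0
```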
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.610 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:07.610 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:07.610 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:07.610 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.610 18:52:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61540 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61540 ']' 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61540 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61540 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.869 killing process with pid 61540 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61540' 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61540 00:08:07.869 18:52:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61540 00:08:09.244 18:52:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:09.244 00:08:09.244 real 0m14.067s 00:08:09.244 user 0m20.785s 00:08:09.244 sys 0m4.166s 00:08:09.244 18:52:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.244 18:52:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:09.244 ************************************ 00:08:09.244 END TEST bdev_nbd 00:08:09.244 ************************************ 00:08:09.244 18:52:40 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:09.244 18:52:40 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:09.244 skipping fio tests on NVMe due to multi-ns failures. 00:08:09.244 18:52:40 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
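Before the log moves on to bdevperf, it is worth condensing the data-integrity pattern the nbd test used above (nbd_common.sh@70-85 and @49-55): one random 1 MiB file is written through every exported device with O_DIRECT, compared back byte for byte, and the devices are then stopped and waited out. A condensed sketch with the device list and sizes from the trace:

```bash
# Condensed sketch of nbd_dd_data_verify (write pass, then verify pass) and the
# stop-and-wait teardown, both as traced above.
tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data

for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write pass
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"                             # byte-for-byte readback
done
rm "$tmp"

for dev in "${nbd_list[@]}"; do                            # teardown: stop and wait
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    # the real waitfornbd_exit caps this loop at 20 tries
    while grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
done
```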
00:08:09.244 18:52:40 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:09.244 18:52:40 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:09.244 18:52:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:08:09.244 18:52:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:09.244 18:52:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:09.244 ************************************
00:08:09.244 START TEST bdev_verify
00:08:09.244 ************************************
00:08:09.244 18:52:40 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:09.244 [2024-11-26 18:52:40.329459] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:08:09.244 [2024-11-26 18:52:40.329725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61963 ]
00:08:09.501 [2024-11-26 18:52:40.517231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:09.501 [2024-11-26 18:52:40.629835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:09.501 [2024-11-26 18:52:40.629835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:10.435 Running I/O for 5 seconds...
00:08:12.304 19136.00 IOPS, 74.75 MiB/s
[2024-11-26T18:52:44.895Z] 20224.00 IOPS, 79.00 MiB/s
[2024-11-26T18:52:45.882Z] 19925.33 IOPS, 77.83 MiB/s
[2024-11-26T18:52:46.471Z] 19824.00 IOPS, 77.44 MiB/s
[2024-11-26T18:52:46.471Z] 19660.80 IOPS, 76.80 MiB/s
00:08:15.256 Latency(us)
00:08:15.256 [2024-11-26T18:52:46.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:15.256 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0x0 length 0xbd0bd
00:08:15.256 Nvme0n1 : 5.06 1617.53 6.32 0.00 0.00 78928.13 15728.64 84839.33
00:08:15.256 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:15.256 Nvme0n1 : 5.06 1618.80 6.32 0.00 0.00 78814.80 16205.27 96278.34
00:08:15.256 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0x0 length 0xa0000
00:08:15.256 Nvme1n1 : 5.07 1616.27 6.31 0.00 0.00 78788.52 17873.45 80073.08
00:08:15.256 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0xa0000 length 0xa0000
00:08:15.256 Nvme1n1 : 5.06 1618.28 6.32 0.00 0.00 78659.95 19184.17 93418.59
00:08:15.256 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0x0 length 0x80000
00:08:15.256 Nvme2n1 : 5.07 1615.77 6.31 0.00 0.00 78653.85 18230.92 73876.95
00:08:15.256 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0x80000 length 0x80000
00:08:15.256 Nvme2n1 : 5.06 1617.76 6.32 0.00 0.00 78495.26 18826.71 91035.46
00:08:15.256 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0x0 length 0x80000
00:08:15.256 Nvme2n2 : 5.07 1615.28 6.31 0.00 0.00 78502.26 17992.61 77213.32
00:08:15.256 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0x80000 length 0x80000
00:08:15.256 Nvme2n2 : 5.07 1617.07 6.32 0.00 0.00 78351.56 18350.08 91035.46
00:08:15.256 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0x0 length 0x80000
00:08:15.256 Nvme2n3 : 5.07 1614.81 6.31 0.00 0.00 78350.24 16920.20 80073.08
00:08:15.256 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0x80000 length 0x80000
00:08:15.256 Nvme2n3 : 5.08 1625.61 6.35 0.00 0.00 77802.61 3619.37 94371.84
00:08:15.256 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0x0 length 0x20000
00:08:15.256 Nvme3n1 : 5.08 1624.59 6.35 0.00 0.00 77745.17 2800.17 86269.21
00:08:15.256 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:15.256 Verification LBA range: start 0x20000 length 0x20000
00:08:15.256 Nvme3n1 : 5.09 1633.70 6.38 0.00 0.00 77309.48 8579.26 97231.59
00:08:15.256 [2024-11-26T18:52:46.471Z] ===================================================================================================================
00:08:15.256 [2024-11-26T18:52:46.471Z] Total : 19435.47 75.92 0.00 0.00 78364.67 2800.17 97231.59
00:08:16.635
00:08:16.635 real 0m7.471s
00:08:16.635 user 0m13.815s
00:08:16.635 sys 0m0.248s
00:08:16.635 18:52:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:16.635 18:52:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:16.635 ************************************
00:08:16.635 END TEST bdev_verify
00:08:16.635 ************************************
00:08:16.635 18:52:47 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:16.635 18:52:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:08:16.635 18:52:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:16.635 18:52:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:16.635 ************************************
00:08:16.635 START TEST bdev_verify_big_io
00:08:16.635 ************************************
00:08:16.635 18:52:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:16.635 [2024-11-26 18:52:47.837033] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
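bdev_verify above and bdev_verify_big_io starting here drive the same bdevperf binary against the same bdev.json config; only the I/O size differs (-o 4096 vs -o 65536), which is why the big-I/O run below reports far fewer IOPS at broadly similar bandwidth. The two invocations, shortened to repo-relative paths:

```bash
# verify pass: 4 KiB I/Os, queue depth 128, 5 s, cores 0-1 (-m 0x3);
# with -C every core submits to every bdev, matching the two job rows per device above
build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''

# big-I/O variant: identical except for 64 KiB I/Os
build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
```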
00:08:16.635 [2024-11-26 18:52:47.837199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62063 ]
00:08:16.893 [2024-11-26 18:52:48.006331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:17.152 [2024-11-26 18:52:48.112218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.152 [2024-11-26 18:52:48.112238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:18.086 Running I/O for 5 seconds...
00:08:23.148 633.00 IOPS, 39.56 MiB/s
[2024-11-26T18:52:55.298Z] 2126.50 IOPS, 132.91 MiB/s
[2024-11-26T18:52:55.299Z] 2960.67 IOPS, 185.04 MiB/s
00:08:24.084 Latency(us)
00:08:24.084 [2024-11-26T18:52:55.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:24.084 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0x0 length 0xbd0b
00:08:24.084 Nvme0n1 : 5.70 123.53 7.72 0.00 0.00 994289.78 21686.46 1098145.05
00:08:24.084 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:24.084 Nvme0n1 : 5.71 116.82 7.30 0.00 0.00 1045229.62 32410.53 1098145.05
00:08:24.084 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0x0 length 0xa000
00:08:24.084 Nvme1n1 : 5.88 126.43 7.90 0.00 0.00 935084.94 72447.07 888429.85
00:08:24.084 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0xa000 length 0xa000
00:08:24.084 Nvme1n1 : 5.78 121.71 7.61 0.00 0.00 984700.74 70540.57 1121023.07
00:08:24.084 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0x0 length 0x8000
00:08:24.084 Nvme2n1 : 5.88 125.84 7.86 0.00 0.00 906343.24 72447.07 793104.76
00:08:24.084 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0x8000 length 0x8000
00:08:24.084 Nvme2n1 : 5.94 124.85 7.80 0.00 0.00 927510.24 88652.33 1159153.11
00:08:24.084 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0x0 length 0x8000
00:08:24.084 Nvme2n2 : 5.88 130.53 8.16 0.00 0.00 859137.86 103427.72 827421.79
00:08:24.084 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0x8000 length 0x8000
00:08:24.084 Nvme2n2 : 5.94 124.80 7.80 0.00 0.00 897333.61 90082.21 1197283.14
00:08:24.084 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0x0 length 0x8000
00:08:24.084 Nvme2n3 : 5.98 139.16 8.70 0.00 0.00 786291.79 35270.28 861738.82
00:08:24.084 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0x8000 length 0x8000
00:08:24.084 Nvme2n3 : 6.00 132.86 8.30 0.00 0.00 825053.76 56241.80 1227787.17
00:08:24.084 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0x0 length 0x2000
00:08:24.084 Nvme3n1 : 6.00 149.43 9.34 0.00 0.00 712186.68 3932.16 880803.84
00:08:24.084 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.084 Verification LBA range: start 0x2000 length 0x2000
00:08:24.084 Nvme3n1 : 6.02 144.31 9.02 0.00 0.00 739744.44 8579.26 1265917.21
00:08:24.084 [2024-11-26T18:52:55.299Z] ===================================================================================================================
00:08:24.084 [2024-11-26T18:52:55.299Z] Total : 1560.26 97.52 0.00 0.00 875960.29 3932.16 1265917.21
00:08:25.460
00:08:25.460 real 0m8.888s
00:08:25.460 user 0m16.648s
00:08:25.460 sys 0m0.270s
00:08:25.460 18:52:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:25.460 18:52:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:25.460 ************************************
00:08:25.460 END TEST bdev_verify_big_io
00:08:25.460 ************************************
00:08:25.719 18:52:56 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:25.719 18:52:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:08:25.719 18:52:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:25.719 18:52:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:25.719 ************************************
00:08:25.719 START TEST bdev_write_zeroes
00:08:25.719 ************************************
00:08:25.719 18:52:56 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:25.719 [2024-11-26 18:52:56.777139] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:08:25.978 [2024-11-26 18:52:56.777339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62176 ]
00:08:25.978 [2024-11-26 18:52:56.951208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:25.978 [2024-11-26 18:52:57.054379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:26.544 Running I/O for 1 seconds...
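As a sanity check, the big-I/O totals above are internally consistent: 1560.26 IOPS at 64 KiB per I/O is exactly the reported 97.52 MiB/s, and the same relation ties every IOPS/MiB-per-second pair in these tables:

```bash
# 1560.26 IOPS * 65536 bytes per I/O / 1048576 bytes per MiB = 97.52 MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 1560.26 * 65536 / 1048576 }'
```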
00:08:27.918 46080.00 IOPS, 180.00 MiB/s
00:08:27.918 Latency(us)
00:08:27.918 [2024-11-26T18:52:59.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:27.918 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:27.918 Nvme0n1 : 1.03 7648.19 29.88 0.00 0.00 16670.72 5362.04 40513.16
00:08:27.918 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:27.918 Nvme1n1 : 1.03 7624.55 29.78 0.00 0.00 16688.23 11736.90 27286.81
00:08:27.918 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:27.918 Nvme2n1 : 1.04 7601.14 29.69 0.00 0.00 16673.36 11141.12 26929.34
00:08:27.918 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:27.918 Nvme2n2 : 1.04 7578.00 29.60 0.00 0.00 16628.57 7506.85 26810.18
00:08:27.918 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:27.918 Nvme2n3 : 1.04 7555.26 29.51 0.00 0.00 16639.71 7149.38 26929.34
00:08:27.918 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:27.918 Nvme3n1 : 1.05 7532.29 29.42 0.00 0.00 16650.96 7119.59 27525.12
00:08:27.918 [2024-11-26T18:52:59.133Z] ===================================================================================================================
00:08:27.918 [2024-11-26T18:52:59.133Z] Total : 45539.42 177.89 0.00 0.00 16658.59 5362.04 40513.16
00:08:28.861
00:08:28.861 real 0m3.192s
00:08:28.861 user 0m2.837s
00:08:28.861 sys 0m0.232s
00:08:28.861 18:52:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:28.861 18:52:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:28.861 ************************************
00:08:28.861 END TEST bdev_write_zeroes
00:08:28.861 ************************************
00:08:28.861 18:52:59 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:28.861 18:52:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:08:28.861 18:52:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:28.861 18:52:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:28.861 ************************************
00:08:28.861 START TEST bdev_json_nonenclosed
00:08:28.861 ************************************
00:08:28.861 18:52:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:28.861 [2024-11-26 18:53:00.020609] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:08:28.861 [2024-11-26 18:53:00.020814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62229 ] 00:08:29.120 [2024-11-26 18:53:00.199399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.120 [2024-11-26 18:53:00.326515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.120 [2024-11-26 18:53:00.326646] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:29.120 [2024-11-26 18:53:00.326680] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:29.120 [2024-11-26 18:53:00.326698] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.685 00:08:29.685 real 0m0.678s 00:08:29.686 user 0m0.459s 00:08:29.686 sys 0m0.114s 00:08:29.686 18:53:00 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.686 18:53:00 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:29.686 ************************************ 00:08:29.686 END TEST bdev_json_nonenclosed 00:08:29.686 ************************************ 00:08:29.686 18:53:00 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:29.686 18:53:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:29.686 18:53:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.686 18:53:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.686 ************************************ 00:08:29.686 START TEST bdev_json_nonarray 00:08:29.686 ************************************ 00:08:29.686 18:53:00 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:29.686 [2024-11-26 18:53:00.745817] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:08:29.686 [2024-11-26 18:53:00.746152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62260 ] 00:08:29.944 [2024-11-26 18:53:00.918385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.944 [2024-11-26 18:53:01.023330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.944 [2024-11-26 18:53:01.023439] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
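bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed --json config and passes only if json_config_prepare_ctx rejects it with the *ERROR* lines shown above, followed by the non-zero spdk_app_stop. Illustrative fixtures in the spirit of those two files (hypothetical contents; the real nonenclosed.json and nonarray.json under test/bdev may differ in detail):

```bash
# Hypothetical stand-ins for the two malformed configs; real fixtures may differ.
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# expected: Invalid JSON configuration: not enclosed in {}.

cat > nonarray.json <<'EOF'
{ "subsystems": "not-an-array" }
EOF
# expected: Invalid JSON configuration: 'subsystems' should be an array.
```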
00:08:29.944 [2024-11-26 18:53:01.023466] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:29.944 [2024-11-26 18:53:01.023481] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.203 00:08:30.203 real 0m0.610s 00:08:30.203 user 0m0.394s 00:08:30.203 sys 0m0.112s 00:08:30.203 18:53:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.203 ************************************ 00:08:30.203 18:53:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:30.203 END TEST bdev_json_nonarray 00:08:30.203 ************************************ 00:08:30.203 18:53:01 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:30.203 18:53:01 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:30.203 18:53:01 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:30.203 18:53:01 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:30.203 18:53:01 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:30.203 18:53:01 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:30.203 18:53:01 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:30.203 18:53:01 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:30.203 18:53:01 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:30.203 18:53:01 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:30.203 18:53:01 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:30.203 00:08:30.203 real 0m44.180s 00:08:30.203 user 1m8.119s 00:08:30.203 sys 0m6.541s 00:08:30.203 18:53:01 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.203 18:53:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.203 ************************************ 00:08:30.203 END TEST blockdev_nvme 00:08:30.203 ************************************ 00:08:30.203 18:53:01 -- spdk/autotest.sh@209 -- # uname -s 00:08:30.203 18:53:01 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:30.203 18:53:01 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:30.203 18:53:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.203 18:53:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.203 18:53:01 -- common/autotest_common.sh@10 -- # set +x 00:08:30.203 ************************************ 00:08:30.203 START TEST blockdev_nvme_gpt 00:08:30.203 ************************************ 00:08:30.203 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:30.462 * Looking for test storage... 
00:08:30.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:30.462 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:30.462 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:08:30.462 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:30.462 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.462 18:53:01 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:30.462 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.462 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:30.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.462 --rc genhtml_branch_coverage=1 00:08:30.462 --rc genhtml_function_coverage=1 00:08:30.462 --rc genhtml_legend=1 00:08:30.462 --rc geninfo_all_blocks=1 00:08:30.462 --rc geninfo_unexecuted_blocks=1 00:08:30.462 00:08:30.462 ' 00:08:30.462 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:30.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.462 --rc 
genhtml_branch_coverage=1 00:08:30.462 --rc genhtml_function_coverage=1 00:08:30.462 --rc genhtml_legend=1 00:08:30.462 --rc geninfo_all_blocks=1 00:08:30.462 --rc geninfo_unexecuted_blocks=1 00:08:30.462 00:08:30.462 ' 00:08:30.462 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:30.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.462 --rc genhtml_branch_coverage=1 00:08:30.462 --rc genhtml_function_coverage=1 00:08:30.462 --rc genhtml_legend=1 00:08:30.462 --rc geninfo_all_blocks=1 00:08:30.462 --rc geninfo_unexecuted_blocks=1 00:08:30.462 00:08:30.462 ' 00:08:30.462 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:30.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.462 --rc genhtml_branch_coverage=1 00:08:30.462 --rc genhtml_function_coverage=1 00:08:30.462 --rc genhtml_legend=1 00:08:30.462 --rc geninfo_all_blocks=1 00:08:30.462 --rc geninfo_unexecuted_blocks=1 00:08:30.462 00:08:30.462 ' 00:08:30.462 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:30.462 18:53:01 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:30.462 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:30.462 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:30.462 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:30.462 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:30.462 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:30.462 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62343 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62343 
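
The cmp_versions walk above (scripts/common.sh@333 through @368) decides whether the detected lcov sorts before version 2, after which the script exports the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options seen at @1694/@1706. Condensed into a standalone sketch — the function name is mine, but the IFS='.-:' split and the field-by-field compare follow the same idea as the trace:

version_lt() {  # true (0) when $1 sorts before $2, e.g. version_lt 1.15 2
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < len; i++ )); do
    local a=${ver1[i]:-0} b=${ver2[i]:-0}  # missing fields compare as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1  # equal is not strictly less-than
}
version_lt 1.15 2 && echo '1.15 sorts before 2'  # matches the lt 1.15 2 result above
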
00:08:30.463 18:53:01 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:30.463 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62343 ']' 00:08:30.463 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.463 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.463 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.463 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.463 18:53:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:30.721 [2024-11-26 18:53:01.686402] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:08:30.721 [2024-11-26 18:53:01.686564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62343 ] 00:08:30.721 [2024-11-26 18:53:01.868307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.978 [2024-11-26 18:53:02.010047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.912 18:53:02 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.912 18:53:02 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:08:31.912 18:53:02 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:31.912 18:53:02 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:31.912 18:53:02 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:31.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:32.170 Waiting for block devices as requested 00:08:32.170 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:32.497 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:32.497 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:32.497 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:37.822 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:37.823 18:53:08 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:37.823 18:53:08 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:37.823 BYT; 00:08:37.823 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:37.823 BYT; 00:08:37.823 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:37.823 18:53:08 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:37.823 18:53:08 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:38.770 The operation has completed successfully. 00:08:38.770 18:53:09 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:39.705 The operation has completed successfully. 00:08:39.705 18:53:10 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:40.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:40.840 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.840 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.840 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.840 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.840 18:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:40.840 18:53:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.840 18:53:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:40.840 [] 00:08:40.840 18:53:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.840 18:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:40.840 18:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:40.840 18:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:40.840 18:53:11 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:40.840 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:40.840 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.840 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.409 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.409 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:41.409 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:41.409 18:53:12 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.409 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.409 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.409 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:41.409 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:41.409 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:41.409 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.409 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:41.409 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:41.410 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7f52cd01-6633-402b-9942-af6ac03e74cd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7f52cd01-6633-402b-9942-af6ac03e74cd",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "94d405e7-fad2-4e1b-b6d1-1823e5f0204e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "94d405e7-fad2-4e1b-b6d1-1823e5f0204e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "cd2d5c9c-9226-4254-97e9-9af3371b6b99"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd2d5c9c-9226-4254-97e9-9af3371b6b99",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c285b2c1-03a6-4761-8273-29b720c11af2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c285b2c1-03a6-4761-8273-29b720c11af2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b2b4b5a7-076d-4ab1-9644-1567febe2d9b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b2b4b5a7-076d-4ab1-9644-1567febe2d9b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:41.410 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:41.410 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:41.410 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:41.410 18:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62343 00:08:41.410 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62343 ']' 00:08:41.410 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62343 00:08:41.410 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:08:41.410 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.410 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62343 00:08:41.410 killing process with pid 62343 00:08:41.410 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.410 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.410 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62343' 00:08:41.410 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62343 00:08:41.410 18:53:12 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62343 00:08:43.940 18:53:14 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:43.940 18:53:14 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:43.940 18:53:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:43.940 18:53:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.940 18:53:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:43.940 ************************************ 00:08:43.940 START TEST bdev_hello_world 00:08:43.940 ************************************ 00:08:43.940 18:53:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:43.940 
[2024-11-26 18:53:14.765988] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:08:43.940 [2024-11-26 18:53:14.766201] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62975 ] 00:08:43.940 [2024-11-26 18:53:14.953737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.940 [2024-11-26 18:53:15.078394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.875 [2024-11-26 18:53:15.727393] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:44.875 [2024-11-26 18:53:15.727657] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:44.875 [2024-11-26 18:53:15.727711] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:44.875 [2024-11-26 18:53:15.730816] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:44.875 [2024-11-26 18:53:15.731408] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:44.875 [2024-11-26 18:53:15.731453] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:44.875 [2024-11-26 18:53:15.731632] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:44.875 00:08:44.875 [2024-11-26 18:53:15.731668] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:45.810 00:08:45.810 real 0m2.067s 00:08:45.810 user 0m1.722s 00:08:45.810 sys 0m0.232s 00:08:45.810 ************************************ 00:08:45.810 END TEST bdev_hello_world 00:08:45.810 ************************************ 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:45.810 18:53:16 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:45.810 18:53:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.810 18:53:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.810 18:53:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:45.810 ************************************ 00:08:45.810 START TEST bdev_bounds 00:08:45.810 ************************************ 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:45.810 Process bdevio pid: 63017 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63017 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63017' 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63017 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63017 ']' 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.810 18:53:16 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.810 18:53:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:45.810 [2024-11-26 18:53:16.869994] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:08:45.810 [2024-11-26 18:53:16.870157] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63017 ] 00:08:46.069 [2024-11-26 18:53:17.046726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.069 [2024-11-26 18:53:17.153567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.069 [2024-11-26 18:53:17.153695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.069 [2024-11-26 18:53:17.153697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.005 18:53:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.005 18:53:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:47.005 18:53:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:47.005 I/O targets: 00:08:47.005 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:47.005 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:47.005 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:47.005 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:47.005 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:47.005 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:47.005 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:47.005 00:08:47.005 00:08:47.005 CUnit - A unit testing framework for C - Version 2.1-3 00:08:47.005 http://cunit.sourceforge.net/ 00:08:47.005 00:08:47.005 00:08:47.005 Suite: bdevio tests on: Nvme3n1 00:08:47.005 Test: blockdev write read block ...passed 00:08:47.005 Test: blockdev write zeroes read block ...passed 00:08:47.005 Test: blockdev write zeroes read no split ...passed 00:08:47.005 Test: blockdev write zeroes read split ...passed 00:08:47.005 Test: blockdev write zeroes read split partial ...passed 00:08:47.005 Test: blockdev reset ...[2024-11-26 18:53:18.100133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:47.005 passed 00:08:47.005 Test: blockdev write read 8 blocks ...[2024-11-26 18:53:18.104386] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:08:47.005 passed 00:08:47.005 Test: blockdev write read size > 128k ...passed 00:08:47.005 Test: blockdev write read invalid size ...passed 00:08:47.005 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.005 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.005 Test: blockdev write read max offset ...passed 00:08:47.005 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.005 Test: blockdev writev readv 8 blocks ...passed 00:08:47.005 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.005 Test: blockdev writev readv block ...passed 00:08:47.005 Test: blockdev writev readv size > 128k ...passed 00:08:47.005 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.005 Test: blockdev comparev and writev ...[2024-11-26 18:53:18.113976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:08:47.005 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2bbc04000 len:0x1000 00:08:47.005 [2024-11-26 18:53:18.114218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.005 passed 00:08:47.005 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:53:18.115196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:47.005 [2024-11-26 18:53:18.115251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:47.006 passed 00:08:47.006 Test: blockdev nvme admin passthru ...passed 00:08:47.006 Test: blockdev copy ...passed 00:08:47.006 Suite: bdevio tests on: Nvme2n3 00:08:47.006 Test: blockdev write read block ...passed 00:08:47.006 Test: blockdev write zeroes read block ...passed 00:08:47.006 Test: blockdev write zeroes read no split ...passed 00:08:47.006 Test: blockdev write zeroes read split ...passed 00:08:47.006 Test: blockdev write zeroes read split partial ...passed 00:08:47.006 Test: blockdev reset ...[2024-11-26 18:53:18.200269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:47.006 [2024-11-26 18:53:18.204853] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:47.006 passed 00:08:47.006 Test: blockdev write read 8 blocks ...passed 00:08:47.006 Test: blockdev write read size > 128k ...passed 00:08:47.006 Test: blockdev write read invalid size ...passed 00:08:47.006 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.006 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.006 Test: blockdev write read max offset ...passed 00:08:47.006 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.006 Test: blockdev writev readv 8 blocks ...passed 00:08:47.006 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.006 Test: blockdev writev readv block ...passed 00:08:47.006 Test: blockdev writev readv size > 128k ...passed 00:08:47.006 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.006 Test: blockdev comparev and writev ...[2024-11-26 18:53:18.213573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:08:47.006 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2bbc02000 len:0x1000 00:08:47.006 [2024-11-26 18:53:18.213807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.006 passed 00:08:47.006 Test: blockdev nvme passthru vendor specific ...passed 00:08:47.006 Test: blockdev nvme admin passthru ...[2024-11-26 18:53:18.214684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:47.006 [2024-11-26 18:53:18.214754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:47.264 passed 00:08:47.264 Test: blockdev copy ...passed 00:08:47.264 Suite: bdevio tests on: Nvme2n2 00:08:47.264 Test: blockdev write read block ...passed 00:08:47.264 Test: blockdev write zeroes read block ...passed 00:08:47.264 Test: blockdev write zeroes read no split ...passed 00:08:47.264 Test: blockdev write zeroes read split ...passed 00:08:47.264 Test: blockdev write zeroes read split partial ...passed 00:08:47.264 Test: blockdev reset ...[2024-11-26 18:53:18.290979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:47.264 passed 00:08:47.264 Test: blockdev write read 8 blocks ...[2024-11-26 18:53:18.295695] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:47.264 passed 00:08:47.264 Test: blockdev write read size > 128k ...passed 00:08:47.264 Test: blockdev write read invalid size ...passed 00:08:47.264 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.264 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.264 Test: blockdev write read max offset ...passed 00:08:47.264 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.264 Test: blockdev writev readv 8 blocks ...passed 00:08:47.264 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.264 Test: blockdev writev readv block ...passed 00:08:47.264 Test: blockdev writev readv size > 128k ...passed 00:08:47.264 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.264 Test: blockdev comparev and writev ...[2024-11-26 18:53:18.305212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:08:47.264 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d0238000 len:0x1000 00:08:47.264 [2024-11-26 18:53:18.305442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.264 passed 00:08:47.264 Test: blockdev nvme passthru vendor specific ...passed 00:08:47.264 Test: blockdev nvme admin passthru ...[2024-11-26 18:53:18.306389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:47.264 [2024-11-26 18:53:18.306449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:47.264 passed 00:08:47.264 Test: blockdev copy ...passed 00:08:47.264 Suite: bdevio tests on: Nvme2n1 00:08:47.264 Test: blockdev write read block ...passed 00:08:47.264 Test: blockdev write zeroes read block ...passed 00:08:47.264 Test: blockdev write zeroes read no split ...passed 00:08:47.264 Test: blockdev write zeroes read split ...passed 00:08:47.264 Test: blockdev write zeroes read split partial ...passed 00:08:47.264 Test: blockdev reset ...[2024-11-26 18:53:18.372804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:47.264 [2024-11-26 18:53:18.377479] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:47.264 passed 00:08:47.264 Test: blockdev write read 8 blocks ...passed 00:08:47.264 Test: blockdev write read size > 128k ...passed 00:08:47.264 Test: blockdev write read invalid size ...passed 00:08:47.264 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.264 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.264 Test: blockdev write read max offset ...passed 00:08:47.264 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.264 Test: blockdev writev readv 8 blocks ...passed 00:08:47.264 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.264 Test: blockdev writev readv block ...passed 00:08:47.264 Test: blockdev writev readv size > 128k ...passed 00:08:47.264 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.264 Test: blockdev comparev and writev ...[2024-11-26 18:53:18.386493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0234000 len:0x1000 00:08:47.264 [2024-11-26 18:53:18.386570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.264 passed 00:08:47.264 Test: blockdev nvme passthru rw ...passed 00:08:47.264 Test: blockdev nvme passthru vendor specific ...passed 00:08:47.264 Test: blockdev nvme admin passthru ...[2024-11-26 18:53:18.387513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:47.264 [2024-11-26 18:53:18.387572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:47.264 passed 00:08:47.264 Test: blockdev copy ...passed 00:08:47.264 Suite: bdevio tests on: Nvme1n1p2 00:08:47.264 Test: blockdev write read block ...passed 00:08:47.264 Test: blockdev write zeroes read block ...passed 00:08:47.264 Test: blockdev write zeroes read no split ...passed 00:08:47.264 Test: blockdev write zeroes read split ...passed 00:08:47.264 Test: blockdev write zeroes read split partial ...passed 00:08:47.264 Test: blockdev reset ...[2024-11-26 18:53:18.468877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:47.264 [2024-11-26 18:53:18.473325] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:47.264 passed 00:08:47.264 Test: blockdev write read 8 blocks ...passed 00:08:47.264 Test: blockdev write read size > 128k ...passed 00:08:47.264 Test: blockdev write read invalid size ...passed 00:08:47.264 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.264 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.523 Test: blockdev write read max offset ...passed 00:08:47.523 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.523 Test: blockdev writev readv 8 blocks ...passed 00:08:47.523 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.523 Test: blockdev writev readv block ...passed 00:08:47.523 Test: blockdev writev readv size > 128k ...passed 00:08:47.523 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.523 Test: blockdev comparev and writev ...[2024-11-26 18:53:18.489753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d0230000 len:0x1000 00:08:47.523 [2024-11-26 18:53:18.489833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.523 passed 00:08:47.523 Test: blockdev nvme passthru rw ...passed 00:08:47.523 Test: blockdev nvme passthru vendor specific ...passed 00:08:47.523 Test: blockdev nvme admin passthru ...passed 00:08:47.523 Test: blockdev copy ...passed 00:08:47.523 Suite: bdevio tests on: Nvme1n1p1 00:08:47.523 Test: blockdev write read block ...passed 00:08:47.523 Test: blockdev write zeroes read block ...passed 00:08:47.523 Test: blockdev write zeroes read no split ...passed 00:08:47.523 Test: blockdev write zeroes read split ...passed 00:08:47.523 Test: blockdev write zeroes read split partial ...passed 00:08:47.523 Test: blockdev reset ...[2024-11-26 18:53:18.584474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:47.523 [2024-11-26 18:53:18.588156] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:47.523 passed 00:08:47.523 Test: blockdev write read 8 blocks ...passed 00:08:47.523 Test: blockdev write read size > 128k ...passed 00:08:47.523 Test: blockdev write read invalid size ...passed 00:08:47.523 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.523 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.523 Test: blockdev write read max offset ...passed 00:08:47.523 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.523 Test: blockdev writev readv 8 blocks ...passed 00:08:47.523 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.523 Test: blockdev writev readv block ...passed 00:08:47.523 Test: blockdev writev readv size > 128k ...passed 00:08:47.523 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.523 Test: blockdev comparev and writev ...[2024-11-26 18:53:18.598277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bc60e000 len:0x1000 00:08:47.523 [2024-11-26 18:53:18.598512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.523 passed 00:08:47.523 Test: blockdev nvme passthru rw ...passed 00:08:47.523 Test: blockdev nvme passthru vendor specific ...passed 00:08:47.523 Test: blockdev nvme admin passthru ...passed 00:08:47.523 Test: blockdev copy ...passed 00:08:47.523 Suite: bdevio tests on: Nvme0n1 00:08:47.523 Test: blockdev write read block ...passed 00:08:47.523 Test: blockdev write zeroes read block ...passed 00:08:47.523 Test: blockdev write zeroes read no split ...passed 00:08:47.523 Test: blockdev write zeroes read split ...passed 00:08:47.523 Test: blockdev write zeroes read split partial ...passed 00:08:47.523 Test: blockdev reset ...[2024-11-26 18:53:18.704674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:47.523 passed 00:08:47.523 Test: blockdev write read 8 blocks ...[2024-11-26 18:53:18.708544] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:47.523 passed 00:08:47.523 Test: blockdev write read size > 128k ...passed 00:08:47.523 Test: blockdev write read invalid size ...passed 00:08:47.523 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.523 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.523 Test: blockdev write read max offset ...passed 00:08:47.523 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.523 Test: blockdev writev readv 8 blocks ...passed 00:08:47.523 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.523 Test: blockdev writev readv block ...passed 00:08:47.524 Test: blockdev writev readv size > 128k ...passed 00:08:47.524 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.524 Test: blockdev comparev and writev ...passed 00:08:47.524 Test: blockdev nvme passthru rw ...[2024-11-26 18:53:18.716403] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:47.524 separate metadata which is not supported yet. 
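Note: the *ERROR* line above is an intentional skip, not a failure — Nvme0n1 carries its metadata in a separate buffer rather than interleaved with the data blocks, and bdevio does not run comparev_and_writev against that layout yet, so the test is reported passed without issuing the I/O. One way to inspect a bdev's metadata layout is the bdev_get_bdevs dump; a sketch, assuming md_size and md_interleave appear in the output under those names (field names are an assumption, not confirmed by this log):

    # Hedged sketch: list metadata layout per bdev from a running target.
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[] | "\(.name): md_size=\(.md_size // 0) interleaved=\(.md_interleave // false)"'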
00:08:47.524 passed 00:08:47.524 Test: blockdev nvme passthru vendor specific ...passed 00:08:47.524 Test: blockdev nvme admin passthru ...[2024-11-26 18:53:18.717052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:47.524 [2024-11-26 18:53:18.717116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:47.524 passed 00:08:47.524 Test: blockdev copy ...passed 00:08:47.524 00:08:47.524 Run Summary: Type Total Ran Passed Failed Inactive 00:08:47.524 suites 7 7 n/a 0 0 00:08:47.524 tests 161 161 161 0 0 00:08:47.524 asserts 1025 1025 1025 0 n/a 00:08:47.524 00:08:47.524 Elapsed time = 1.863 seconds 00:08:47.524 0 00:08:47.783 18:53:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63017 00:08:47.783 18:53:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63017 ']' 00:08:47.783 18:53:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63017 00:08:47.783 18:53:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:47.783 18:53:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.783 18:53:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63017 00:08:47.783 18:53:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.783 18:53:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.783 18:53:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63017' 00:08:47.783 killing process with pid 63017 00:08:47.783 18:53:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63017 00:08:47.783 18:53:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63017 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:48.718 00:08:48.718 real 0m2.925s 00:08:48.718 user 0m7.691s 00:08:48.718 sys 0m0.369s 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:48.718 ************************************ 00:08:48.718 END TEST bdev_bounds 00:08:48.718 ************************************ 00:08:48.718 18:53:19 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:48.718 18:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:48.718 18:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.718 18:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.718 ************************************ 00:08:48.718 START TEST bdev_nbd 00:08:48.718 ************************************ 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:48.718 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63081 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63081 /var/tmp/spdk-nbd.sock 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63081 ']' 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.719 18:53:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 [2024-11-26 18:53:19.853967] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
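Note: bdev_svc is launched above as the NBD-serving app — -r selects the RPC socket, -i the shared-memory instance id, and --json preloads the bdev configuration — after which waitforlisten blocks until the socket answers. A condensed sketch of that launch-and-wait handshake, using rpc_get_methods as a cheap readiness probe (the probe choice is our assumption; the helper's internals are not shown in this trace):

    # Hedged sketch of the launch-and-wait pattern traced above.
    sock=/var/tmp/spdk-nbd.sock
    ./test/app/bdev_svc/bdev_svc -r "$sock" -i 0 \
        --json ./test/bdev/bdev.json &
    nbd_pid=$!
    # Poll the UNIX-domain socket until JSON-RPC answers.
    until ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done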
00:08:48.719 [2024-11-26 18:53:19.854120] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.976 [2024-11-26 18:53:20.033477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.976 [2024-11-26 18:53:20.161099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:49.912 18:53:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:50.170 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:50.170 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:50.170 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.171 1+0 records in 00:08:50.171 1+0 records out 00:08:50.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689853 s, 5.9 MB/s 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:50.171 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.429 1+0 records in 00:08:50.429 1+0 records out 00:08:50.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368282 s, 11.1 MB/s 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:50.429 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.688 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.947 1+0 records in 00:08:50.947 1+0 records out 00:08:50.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068241 s, 6.0 MB/s 00:08:50.947 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.947 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:50.947 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.947 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.947 18:53:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:50.947 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.947 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:50.947 18:53:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.206 1+0 records in 00:08:51.206 1+0 records out 00:08:51.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607287 s, 6.7 MB/s 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:51.206 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.465 1+0 records in 00:08:51.465 1+0 records out 00:08:51.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658627 s, 6.2 MB/s 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:51.465 18:53:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:52.032 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:52.032 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:52.032 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:52.032 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:52.032 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:52.032 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:52.032 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:52.032 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.033 1+0 records in 00:08:52.033 1+0 records out 00:08:52.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655829 s, 6.2 MB/s 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:52.033 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.291 1+0 records in 00:08:52.291 1+0 records out 00:08:52.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081585 s, 5.0 MB/s 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.291 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:52.292 18:53:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:52.292 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:52.292 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:52.292 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:52.550 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:52.550 { 00:08:52.550 "nbd_device": "/dev/nbd0", 00:08:52.550 "bdev_name": "Nvme0n1" 00:08:52.550 }, 00:08:52.550 { 00:08:52.550 "nbd_device": "/dev/nbd1", 00:08:52.550 "bdev_name": "Nvme1n1p1" 00:08:52.550 }, 00:08:52.550 { 00:08:52.550 "nbd_device": "/dev/nbd2", 00:08:52.550 "bdev_name": "Nvme1n1p2" 00:08:52.550 }, 00:08:52.550 { 00:08:52.550 "nbd_device": "/dev/nbd3", 00:08:52.550 "bdev_name": "Nvme2n1" 00:08:52.550 }, 00:08:52.550 { 00:08:52.550 "nbd_device": "/dev/nbd4", 00:08:52.550 "bdev_name": "Nvme2n2" 00:08:52.550 }, 00:08:52.550 { 00:08:52.550 "nbd_device": "/dev/nbd5", 00:08:52.550 "bdev_name": "Nvme2n3" 00:08:52.550 }, 00:08:52.550 { 00:08:52.550 "nbd_device": "/dev/nbd6", 00:08:52.550 "bdev_name": "Nvme3n1" 00:08:52.550 } 00:08:52.550 ]' 00:08:52.550 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:52.550 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:52.550 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:52.550 { 00:08:52.551 "nbd_device": "/dev/nbd0", 00:08:52.551 "bdev_name": "Nvme0n1" 00:08:52.551 }, 00:08:52.551 { 00:08:52.551 "nbd_device": "/dev/nbd1", 00:08:52.551 "bdev_name": "Nvme1n1p1" 00:08:52.551 }, 00:08:52.551 { 00:08:52.551 "nbd_device": "/dev/nbd2", 00:08:52.551 "bdev_name": "Nvme1n1p2" 00:08:52.551 }, 00:08:52.551 { 00:08:52.551 "nbd_device": "/dev/nbd3", 00:08:52.551 "bdev_name": "Nvme2n1" 00:08:52.551 }, 00:08:52.551 { 00:08:52.551 "nbd_device": "/dev/nbd4", 00:08:52.551 "bdev_name": "Nvme2n2" 00:08:52.551 }, 00:08:52.551 { 00:08:52.551 "nbd_device": "/dev/nbd5", 00:08:52.551 "bdev_name": "Nvme2n3" 00:08:52.551 }, 00:08:52.551 { 00:08:52.551 "nbd_device": "/dev/nbd6", 00:08:52.551 "bdev_name": "Nvme3n1" 00:08:52.551 } 00:08:52.551 ]' 00:08:52.551 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:52.551 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.551 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:52.551 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:52.551 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:52.551 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.551 18:53:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:53.118 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:53.119 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:53.119 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:53.119 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.119 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.119 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:53.119 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.119 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.119 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.119 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:53.377 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:53.377 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:53.377 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:53.377 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.377 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.377 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:53.377 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.377 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.377 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.377 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:53.635 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:53.635 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:53.635 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:53.635 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.635 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.635 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:53.635 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.635 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.635 18:53:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.635 18:53:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:53.893 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:53.893 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:53.893 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:53.893 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.893 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.893 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:53.893 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.893 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.893 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.893 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:54.151 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:54.151 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:54.151 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:54.151 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.151 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.151 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:54.151 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.151 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.151 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.151 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:54.718 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:54.718 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:54.718 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:54.718 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.718 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.718 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:54.718 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.718 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.718 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.718 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:54.718 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:54.977 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:54.977 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:54.977 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.977 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.977 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:54.977 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.977 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.977 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:54.977 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.977 18:53:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:55.235 
18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:55.235 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:55.493 /dev/nbd0 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:55.493 1+0 records in 00:08:55.493 1+0 records out 00:08:55.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640483 s, 6.4 MB/s 00:08:55.493 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.494 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:55.494 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.494 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:55.494 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:55.494 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:55.494 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:55.494 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:55.752 /dev/nbd1 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:55.752 18:53:26 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:55.752 1+0 records in 00:08:55.752 1+0 records out 00:08:55.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516586 s, 7.9 MB/s 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:55.752 18:53:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:56.319 /dev/nbd10 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.319 1+0 records in 00:08:56.319 1+0 records out 00:08:56.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510502 s, 8.0 MB/s 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:56.319 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:56.319 /dev/nbd11 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.578 1+0 records in 00:08:56.578 1+0 records out 00:08:56.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879309 s, 4.7 MB/s 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:56.578 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:56.837 /dev/nbd12 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
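Note: every nbd_start_disk in this stretch is followed by the same readiness dance — poll /proc/partitions until the kernel registers the device, then prove it actually serves I/O with a single O_DIRECT 4 KiB read. Condensed into one helper (a sketch of the pattern, not the verbatim autotest_common.sh function; the retry sleep is an assumption, since xtrace elides the loop body between iterations):

    # Hedged sketch of the waitfornbd pattern traced throughout this run.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct 4 KiB read: an empty result means the device is dead.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }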
00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.837 1+0 records in 00:08:56.837 1+0 records out 00:08:56.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705083 s, 5.8 MB/s 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.837 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:56.838 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.838 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:56.838 18:53:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:56.838 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:56.838 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:56.838 18:53:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:57.096 /dev/nbd13 00:08:57.096 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:57.096 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.097 1+0 records in 00:08:57.097 1+0 records out 00:08:57.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072512 s, 5.6 MB/s 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:57.097 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:57.356 /dev/nbd14 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.356 1+0 records in 00:08:57.356 1+0 records out 00:08:57.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690138 s, 5.9 MB/s 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.356 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:57.924 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd0", 00:08:57.924 "bdev_name": "Nvme0n1" 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd1", 00:08:57.924 "bdev_name": "Nvme1n1p1" 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd10", 00:08:57.924 "bdev_name": "Nvme1n1p2" 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd11", 00:08:57.924 "bdev_name": "Nvme2n1" 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd12", 00:08:57.924 "bdev_name": "Nvme2n2" 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd13", 00:08:57.924 "bdev_name": "Nvme2n3" 
00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd14", 00:08:57.924 "bdev_name": "Nvme3n1" 00:08:57.924 } 00:08:57.924 ]' 00:08:57.924 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd0", 00:08:57.924 "bdev_name": "Nvme0n1" 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd1", 00:08:57.924 "bdev_name": "Nvme1n1p1" 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd10", 00:08:57.924 "bdev_name": "Nvme1n1p2" 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd11", 00:08:57.924 "bdev_name": "Nvme2n1" 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd12", 00:08:57.924 "bdev_name": "Nvme2n2" 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd13", 00:08:57.924 "bdev_name": "Nvme2n3" 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "nbd_device": "/dev/nbd14", 00:08:57.924 "bdev_name": "Nvme3n1" 00:08:57.924 } 00:08:57.924 ]' 00:08:57.924 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:57.924 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:57.924 /dev/nbd1 00:08:57.924 /dev/nbd10 00:08:57.924 /dev/nbd11 00:08:57.924 /dev/nbd12 00:08:57.924 /dev/nbd13 00:08:57.924 /dev/nbd14' 00:08:57.924 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:57.925 /dev/nbd1 00:08:57.925 /dev/nbd10 00:08:57.925 /dev/nbd11 00:08:57.925 /dev/nbd12 00:08:57.925 /dev/nbd13 00:08:57.925 /dev/nbd14' 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:57.925 256+0 records in 00:08:57.925 256+0 records out 00:08:57.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00901299 s, 116 MB/s 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:57.925 18:53:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:57.925 256+0 records in 00:08:57.925 256+0 records out 00:08:57.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.156073 s, 6.7 MB/s 00:08:57.925 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:57.925 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:58.182 256+0 records in 00:08:58.182 256+0 records out 00:08:58.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165691 s, 6.3 MB/s 00:08:58.182 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.182 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:58.459 256+0 records in 00:08:58.459 256+0 records out 00:08:58.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165333 s, 6.3 MB/s 00:08:58.459 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.459 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:58.459 256+0 records in 00:08:58.459 256+0 records out 00:08:58.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157765 s, 6.6 MB/s 00:08:58.459 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.459 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:58.721 256+0 records in 00:08:58.721 256+0 records out 00:08:58.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147083 s, 7.1 MB/s 00:08:58.721 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.721 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:58.721 256+0 records in 00:08:58.721 256+0 records out 00:08:58.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163503 s, 6.4 MB/s 00:08:58.721 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.721 18:53:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:58.979 256+0 records in 00:08:58.979 256+0 records out 00:08:58.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14965 s, 7.0 MB/s 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.979 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:59.237 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:59.237 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:59.237 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:59.237 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.237 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.237 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:59.237 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.238 18:53:30 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:59.238 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.238 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:59.805 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:59.805 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:59.805 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:59.805 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.805 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.805 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:59.805 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.805 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.806 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.806 18:53:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:00.064 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:00.064 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:00.064 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:00.064 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.064 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.064 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:00.064 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.064 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.064 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.064 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:00.321 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:00.322 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:00.322 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:00.322 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.322 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.322 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:00.322 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.322 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.322 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.322 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:00.887 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:00.887 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:00.887 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:00.887 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.887 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.887 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:00.887 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.887 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.887 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.887 18:53:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:00.887 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:00.887 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:00.887 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:00.887 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.887 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.887 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:01.146 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:01.146 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.146 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.146 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:01.404 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:01.404 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:01.404 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:01.404 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.404 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.404 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:01.404 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:01.404 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.404 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:01.404 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.404 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:01.662 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:01.662 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:01.662 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:01.662 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:01.662 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:01.662 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:01.662 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:01.662 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:01.662 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:01.662 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:01.663 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:01.663 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:01.663 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:01.663 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.663 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:01.663 18:53:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:01.921 malloc_lvol_verify 00:09:01.921 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:02.179 93e9dc8e-6084-4d27-a60f-7c5518eb3fc3 00:09:02.179 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:02.437 87615251-1b3e-4fee-81a4-09015acec3c3 00:09:02.437 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:03.004 /dev/nbd0 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:03.004 mke2fs 1.47.0 (5-Feb-2023) 00:09:03.004 Discarding device blocks: 0/4096 done 00:09:03.004 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:03.004 00:09:03.004 Allocating group tables: 0/1 done 00:09:03.004 Writing inode tables: 0/1 done 00:09:03.004 Creating journal (1024 blocks): done 00:09:03.004 Writing superblocks and filesystem accounting information: 0/1 done 00:09:03.004 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:03.004 18:53:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63081 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63081 ']' 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63081 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63081 00:09:03.262 killing process with pid 63081 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63081' 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63081 00:09:03.262 18:53:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63081 00:09:04.205 ************************************ 00:09:04.205 END TEST bdev_nbd 00:09:04.205 ************************************ 00:09:04.205 18:53:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:04.205 00:09:04.205 real 0m15.628s 00:09:04.205 user 0m22.851s 00:09:04.205 sys 0m4.861s 00:09:04.205 18:53:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.205 18:53:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:04.464 18:53:35 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:04.464 18:53:35 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:09:04.464 skipping fio tests on NVMe due to multi-ns failures. 00:09:04.464 18:53:35 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:09:04.464 18:53:35 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:04.464 18:53:35 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:04.464 18:53:35 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:04.464 18:53:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:04.464 18:53:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.464 18:53:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.464 ************************************ 00:09:04.464 START TEST bdev_verify 00:09:04.464 ************************************ 00:09:04.464 18:53:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:04.464 [2024-11-26 18:53:35.535346] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:09:04.464 [2024-11-26 18:53:35.535519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63537 ] 00:09:04.723 [2024-11-26 18:53:35.728514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:04.723 [2024-11-26 18:53:35.857277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.723 [2024-11-26 18:53:35.857281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.658 Running I/O for 5 seconds... 
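The bdev_verify stage starting here runs every bdev from bdev.json through bdevperf for five seconds and produces the per-job latency table that follows. The invocation, taken from the run_test trace above (the trailing empty string is an extra argument run_test passes through):

  # -q 128: 128 outstanding I/Os per job; -o 4096: 4 KiB I/Os;
  # -w verify: write, read back, and compare payloads; -t 5: run 5 seconds;
  # -m 0x3: reactors on cores 0 and 1. Judging by the table below, -C gives
  # each enabled core its own job against every bdev (two rows per device).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3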
00:09:07.969 19904.00 IOPS, 77.75 MiB/s
[2024-11-26T18:53:40.116Z] 19648.00 IOPS, 76.75 MiB/s
[2024-11-26T18:53:41.075Z] 19178.67 IOPS, 74.92 MiB/s
[2024-11-26T18:53:42.008Z] 19104.00 IOPS, 74.62 MiB/s
[2024-11-26T18:53:42.008Z] 18982.40 IOPS, 74.15 MiB/s
00:09:10.793 Latency(us)
00:09:10.793 [2024-11-26T18:53:42.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:10.793 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x0 length 0xbd0bd
00:09:10.793 Nvme0n1 : 5.07 1375.35 5.37 0.00 0.00 92605.09 11319.85 92465.34
00:09:10.793 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:10.793 Nvme0n1 : 5.06 1289.91 5.04 0.00 0.00 98786.29 21567.30 85315.96
00:09:10.793 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x0 length 0x4ff80
00:09:10.793 Nvme1n1p1 : 5.07 1374.89 5.37 0.00 0.00 92518.69 9413.35 89128.96
00:09:10.793 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x4ff80 length 0x4ff80
00:09:10.793 Nvme1n1p1 : 5.06 1289.35 5.04 0.00 0.00 98552.06 23354.65 83886.08
00:09:10.793 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x0 length 0x4ff7f
00:09:10.793 Nvme1n1p2 : 5.08 1374.42 5.37 0.00 0.00 92406.33 9353.77 88175.71
00:09:10.793 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:09:10.793 Nvme1n1p2 : 5.09 1294.39 5.06 0.00 0.00 98021.21 9532.51 82932.83
00:09:10.793 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x0 length 0x80000
00:09:10.793 Nvme2n1 : 5.09 1383.17 5.40 0.00 0.00 92001.48 11439.01 86745.83
00:09:10.793 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x80000 length 0x80000
00:09:10.793 Nvme2n1 : 5.10 1293.50 5.05 0.00 0.00 97880.26 11558.17 79596.45
00:09:10.793 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x0 length 0x80000
00:09:10.793 Nvme2n2 : 5.09 1382.77 5.40 0.00 0.00 91888.92 11498.59 89128.96
00:09:10.793 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x80000 length 0x80000
00:09:10.793 Nvme2n2 : 5.11 1301.53 5.08 0.00 0.00 97342.13 12988.04 80549.70
00:09:10.793 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x0 length 0x80000
00:09:10.793 Nvme2n3 : 5.09 1382.32 5.40 0.00 0.00 91769.65 11379.43 93418.59
00:09:10.793 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x80000 length 0x80000
00:09:10.793 Nvme2n3 : 5.12 1301.08 5.08 0.00 0.00 97226.11 13405.09 82456.20
00:09:10.793 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x0 length 0x20000
00:09:10.793 Nvme3n1 : 5.10 1381.41 5.40 0.00 0.00 91661.78 12809.31 94371.84
00:09:10.793 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:10.793 Verification LBA range: start 0x20000 length 0x20000
00:09:10.793 Nvme3n1 : 5.12 1300.58 5.08 0.00 0.00 97162.10 13047.62 86269.21
00:09:10.793 [2024-11-26T18:53:42.008Z] ===================================================================================================================
00:09:10.793 [2024-11-26T18:53:42.008Z] Total : 18724.65 73.14 0.00 0.00 94897.58 9353.77 94371.84
00:09:12.233
00:09:12.233 real 0m7.672s
00:09:12.233 user 0m14.183s
00:09:12.233 sys 0m0.258s
00:09:12.233 18:53:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:12.233 18:53:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:12.233 ************************************
00:09:12.233 END TEST bdev_verify
00:09:12.233 ************************************
00:09:12.233 18:53:43 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:12.233 18:53:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:09:12.233 18:53:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:12.233 18:53:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:12.233 ************************************
00:09:12.234 START TEST bdev_verify_big_io
00:09:12.234 ************************************
00:09:12.234 18:53:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:12.234 [2024-11-26 18:53:43.249112] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:09:12.234 [2024-11-26 18:53:43.249272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63641 ]
00:09:12.234 [2024-11-26 18:53:43.428604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:12.493 [2024-11-26 18:53:43.556296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:12.493 [2024-11-26 18:53:43.556308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:13.430 Running I/O for 5 seconds...
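The big-IO pass launched above is the same verify workload with only the transfer size changed, -o 65536 in place of -o 4096, so per-job IOPS drop while bandwidth rises. In both result tables the MiB/s column is simply IOPS times I/O size; checking the final 4 KiB sample above (18982.40 IOPS) and the final 64 KiB sample below (3147.00 IOPS):

  awk 'BEGIN { printf "%.2f MiB/s\n", 18982.40 * 4096  / 1048576 }'   # prints 74.15
  awk 'BEGIN { printf "%.2f MiB/s\n", 3147.00  * 65536 / 1048576 }'   # prints 196.69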
00:09:19.252 1522.00 IOPS, 95.12 MiB/s
[2024-11-26T18:53:50.725Z] 2608.50 IOPS, 163.03 MiB/s
[2024-11-26T18:53:50.983Z] 3147.00 IOPS, 196.69 MiB/s
00:09:19.768 Latency(us)
00:09:19.768 [2024-11-26T18:53:50.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:19.768 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x0 length 0xbd0b
00:09:19.768 Nvme0n1 : 5.94 113.13 7.07 0.00 0.00 1060959.13 16681.89 1502323.43
00:09:19.768 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:19.768 Nvme0n1 : 5.85 104.87 6.55 0.00 0.00 1165428.25 24546.21 1082893.03
00:09:19.768 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x0 length 0x4ff8
00:09:19.768 Nvme1n1p1 : 5.82 115.46 7.22 0.00 0.00 1013090.81 101044.60 1265917.21
00:09:19.768 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x4ff8 length 0x4ff8
00:09:19.768 Nvme1n1p1 : 5.85 109.44 6.84 0.00 0.00 1110133.29 96754.97 937998.89
00:09:19.768 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x0 length 0x4ff7
00:09:19.768 Nvme1n1p2 : 6.02 123.13 7.70 0.00 0.00 922920.39 45756.04 1021884.97
00:09:19.768 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x4ff7 length 0x4ff7
00:09:19.768 Nvme1n1p2 : 5.85 109.38 6.84 0.00 0.00 1083133.21 97231.59 934185.89
00:09:19.768 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x0 length 0x8000
00:09:19.768 Nvme2n1 : 6.02 125.91 7.87 0.00 0.00 874520.94 28120.90 1105771.05
00:09:19.768 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x8000 length 0x8000
00:09:19.768 Nvme2n1 : 5.91 105.60 6.60 0.00 0.00 1083946.73 118679.74 1037136.99
00:09:19.768 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x0 length 0x8000
00:09:19.768 Nvme2n2 : 6.05 124.09 7.76 0.00 0.00 852289.53 28240.06 1624339.55
00:09:19.768 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x8000 length 0x8000
00:09:19.768 Nvme2n2 : 5.91 113.67 7.10 0.00 0.00 992782.63 56003.49 1067641.02
00:09:19.768 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x0 length 0x8000
00:09:19.768 Nvme2n3 : 6.15 110.12 6.88 0.00 0.00 939141.70 15966.95 2120030.02
00:09:19.768 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x8000 length 0x8000
00:09:19.768 Nvme2n3 : 5.95 118.41 7.40 0.00 0.00 926437.09 30265.72 1098145.05
00:09:19.768 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x0 length 0x2000
00:09:19.768 Nvme3n1 : 6.30 163.91 10.24 0.00 0.00 609819.38 528.76 2150534.05
00:09:19.768 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:19.768 Verification LBA range: start 0x2000 length 0x2000
00:09:19.768 Nvme3n1 : 5.96 128.91 8.06 0.00 0.00 827231.11 4915.20 1128649.08
00:09:19.768 [2024-11-26T18:53:50.983Z] ===================================================================================================================
00:09:19.768 [2024-11-26T18:53:50.983Z] Total : 1666.04 104.13 0.00 0.00 942883.42 528.76 2150534.05
00:09:21.667
00:09:21.667 real 0m9.275s
00:09:21.667 user 0m17.404s
00:09:21.667 sys 0m0.263s
00:09:21.667 18:53:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:21.667 18:53:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:21.667 ************************************
00:09:21.667 END TEST bdev_verify_big_io
00:09:21.667 ************************************
00:09:21.667 18:53:52 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:21.667 18:53:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:21.667 18:53:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:21.667 18:53:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:21.667 ************************************
00:09:21.667 START TEST bdev_write_zeroes
00:09:21.667 ************************************
00:09:21.667 18:53:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:21.667 [2024-11-26 18:53:52.583965] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:09:21.667 [2024-11-26 18:53:52.584150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63761 ]
00:09:21.667 [2024-11-26 18:53:52.767503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.667 [2024-11-26 18:53:52.871629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:22.601 Running I/O for 1 seconds...
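bdev_write_zeroes reuses the same harness with -w write_zeroes -t 1, and the EAL core mask here is 0x1, hence "Total cores available: 1" above and a single job per bdev in the table below. Whether a bdev accepts write_zeroes at all shows up in its supported_io_types map, as the bdev dumps later in this log illustrate; a quick way to list that against a running target, assuming jq is installed:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | "\(.name): write_zeroes=\(.supported_io_types.write_zeroes)"'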
00:09:23.535 43904.00 IOPS, 171.50 MiB/s
00:09:23.535 Latency(us)
00:09:23.535 [2024-11-26T18:53:54.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:23.535 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.535 Nvme0n1 : 1.03 6260.65 24.46 0.00 0.00 20368.05 7208.96 38368.35
00:09:23.535 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.535 Nvme1n1p1 : 1.03 6246.35 24.40 0.00 0.00 20378.29 14358.34 31218.97
00:09:23.535 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.535 Nvme1n1p2 : 1.04 6232.51 24.35 0.00 0.00 20341.82 14477.50 30384.87
00:09:23.535 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.535 Nvme2n1 : 1.04 6221.68 24.30 0.00 0.00 20312.10 13524.25 29193.31
00:09:23.535 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.535 Nvme2n2 : 1.04 6212.09 24.27 0.00 0.00 20283.96 10545.34 28835.84
00:09:23.535 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.535 Nvme2n3 : 1.04 6202.34 24.23 0.00 0.00 20260.87 9532.51 29193.31
00:09:23.535 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:23.535 Nvme3n1 : 1.04 6192.80 24.19 0.00 0.00 20253.26 9353.77 30742.34
00:09:23.535 [2024-11-26T18:53:54.750Z] ===================================================================================================================
00:09:23.535 [2024-11-26T18:53:54.750Z] Total : 43568.41 170.19 0.00 0.00 20314.05 7208.96 38368.35
00:09:24.494
00:09:24.494 real 0m3.197s
00:09:24.494 user 0m2.825s
00:09:24.494 sys 0m0.245s
00:09:24.494 18:53:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:24.494 18:53:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:24.494 ************************************
00:09:24.494 END TEST bdev_write_zeroes
00:09:24.494 ************************************
00:09:24.752 18:53:55 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:24.752 18:53:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:24.752 18:53:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:24.752 18:53:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:24.752 ************************************
00:09:24.752 START TEST bdev_json_nonenclosed
00:09:24.752 ************************************
00:09:24.752 18:53:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:24.752 [2024-11-26 18:53:55.813183] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:09:24.752 [2024-11-26 18:53:55.813341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63814 ] 00:09:25.010 [2024-11-26 18:53:55.995412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.010 [2024-11-26 18:53:56.123066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.010 [2024-11-26 18:53:56.123231] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:25.010 [2024-11-26 18:53:56.123267] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:25.010 [2024-11-26 18:53:56.123284] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:25.268 00:09:25.268 real 0m0.681s 00:09:25.268 user 0m0.469s 00:09:25.268 sys 0m0.106s 00:09:25.268 18:53:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.268 18:53:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:25.268 ************************************ 00:09:25.268 END TEST bdev_json_nonenclosed 00:09:25.268 ************************************ 00:09:25.268 18:53:56 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:25.268 18:53:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:25.268 18:53:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.268 18:53:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:25.268 ************************************ 00:09:25.268 START TEST bdev_json_nonarray 00:09:25.268 ************************************ 00:09:25.268 18:53:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:25.526 [2024-11-26 18:53:56.539923] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:09:25.526 [2024-11-26 18:53:56.540084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63839 ] 00:09:25.526 [2024-11-26 18:53:56.714845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.785 [2024-11-26 18:53:56.817943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.785 [2024-11-26 18:53:56.818067] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
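The two ERROR lines above are the expected outcome of these tests: bdev_json_nonenclosed feeds bdevperf a config whose top level is not enclosed in {}, and bdev_json_nonarray one whose "subsystems" key is not an array, and each run ends with spdk_app_stop'd on non-zero (the second one just below). For contrast, a minimal well-formed config file has the shape sketched here; the malloc bdev entry is illustrative, not taken from this run:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 }
          }
        ]
      }
    ]
  }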
00:09:25.785 [2024-11-26 18:53:56.818095] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:25.785 [2024-11-26 18:53:56.818109] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:26.044 00:09:26.044 real 0m0.628s 00:09:26.044 user 0m0.396s 00:09:26.044 sys 0m0.127s 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:26.044 ************************************ 00:09:26.044 END TEST bdev_json_nonarray 00:09:26.044 ************************************ 00:09:26.044 18:53:57 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:26.044 18:53:57 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:26.044 18:53:57 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:26.044 18:53:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.044 18:53:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.044 18:53:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:26.044 ************************************ 00:09:26.044 START TEST bdev_gpt_uuid 00:09:26.044 ************************************ 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63865 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63865 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63865 ']' 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.044 18:53:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:26.044 [2024-11-26 18:53:57.253889] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:09:26.044 [2024-11-26 18:53:57.254060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63865 ] 00:09:26.301 [2024-11-26 18:53:57.435309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.559 [2024-11-26 18:53:57.573545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.492 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.492 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:09:27.492 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:27.492 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.492 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:27.751 Some configs were skipped because the RPC state that can call them passed over. 00:09:27.751 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.751 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:27.751 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.751 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:27.751 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.751 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:27.751 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.751 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:27.751 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.751 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:27.751 { 00:09:27.751 "name": "Nvme1n1p1", 00:09:27.752 "aliases": [ 00:09:27.752 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:27.752 ], 00:09:27.752 "product_name": "GPT Disk", 00:09:27.752 "block_size": 4096, 00:09:27.752 "num_blocks": 655104, 00:09:27.752 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:27.752 "assigned_rate_limits": { 00:09:27.752 "rw_ios_per_sec": 0, 00:09:27.752 "rw_mbytes_per_sec": 0, 00:09:27.752 "r_mbytes_per_sec": 0, 00:09:27.752 "w_mbytes_per_sec": 0 00:09:27.752 }, 00:09:27.752 "claimed": false, 00:09:27.752 "zoned": false, 00:09:27.752 "supported_io_types": { 00:09:27.752 "read": true, 00:09:27.752 "write": true, 00:09:27.752 "unmap": true, 00:09:27.752 "flush": true, 00:09:27.752 "reset": true, 00:09:27.752 "nvme_admin": false, 00:09:27.752 "nvme_io": false, 00:09:27.752 "nvme_io_md": false, 00:09:27.752 "write_zeroes": true, 00:09:27.752 "zcopy": false, 00:09:27.752 "get_zone_info": false, 00:09:27.752 "zone_management": false, 00:09:27.752 "zone_append": false, 00:09:27.752 "compare": true, 00:09:27.752 "compare_and_write": false, 00:09:27.752 "abort": true, 00:09:27.752 "seek_hole": false, 00:09:27.752 "seek_data": false, 00:09:27.752 "copy": true, 00:09:27.752 "nvme_iov_md": false 00:09:27.752 }, 00:09:27.752 "driver_specific": { 
00:09:27.752 "gpt": { 00:09:27.752 "base_bdev": "Nvme1n1", 00:09:27.752 "offset_blocks": 256, 00:09:27.752 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:27.752 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:27.752 "partition_name": "SPDK_TEST_first" 00:09:27.752 } 00:09:27.752 } 00:09:27.752 } 00:09:27.752 ]' 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:27.752 { 00:09:27.752 "name": "Nvme1n1p2", 00:09:27.752 "aliases": [ 00:09:27.752 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:27.752 ], 00:09:27.752 "product_name": "GPT Disk", 00:09:27.752 "block_size": 4096, 00:09:27.752 "num_blocks": 655103, 00:09:27.752 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:27.752 "assigned_rate_limits": { 00:09:27.752 "rw_ios_per_sec": 0, 00:09:27.752 "rw_mbytes_per_sec": 0, 00:09:27.752 "r_mbytes_per_sec": 0, 00:09:27.752 "w_mbytes_per_sec": 0 00:09:27.752 }, 00:09:27.752 "claimed": false, 00:09:27.752 "zoned": false, 00:09:27.752 "supported_io_types": { 00:09:27.752 "read": true, 00:09:27.752 "write": true, 00:09:27.752 "unmap": true, 00:09:27.752 "flush": true, 00:09:27.752 "reset": true, 00:09:27.752 "nvme_admin": false, 00:09:27.752 "nvme_io": false, 00:09:27.752 "nvme_io_md": false, 00:09:27.752 "write_zeroes": true, 00:09:27.752 "zcopy": false, 00:09:27.752 "get_zone_info": false, 00:09:27.752 "zone_management": false, 00:09:27.752 "zone_append": false, 00:09:27.752 "compare": true, 00:09:27.752 "compare_and_write": false, 00:09:27.752 "abort": true, 00:09:27.752 "seek_hole": false, 00:09:27.752 "seek_data": false, 00:09:27.752 "copy": true, 00:09:27.752 "nvme_iov_md": false 00:09:27.752 }, 00:09:27.752 "driver_specific": { 00:09:27.752 "gpt": { 00:09:27.752 "base_bdev": "Nvme1n1", 00:09:27.752 "offset_blocks": 655360, 00:09:27.752 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:27.752 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:27.752 "partition_name": "SPDK_TEST_second" 00:09:27.752 } 00:09:27.752 } 00:09:27.752 } 00:09:27.752 ]' 00:09:27.752 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:28.010 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:28.010 18:53:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:28.010 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63865 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63865 ']' 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63865 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63865 00:09:28.011 killing process with pid 63865 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63865' 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63865 00:09:28.011 18:53:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63865 00:09:30.569 ************************************ 00:09:30.569 END TEST bdev_gpt_uuid 00:09:30.569 ************************************ 00:09:30.569 00:09:30.569 real 0m4.061s 00:09:30.569 user 0m4.444s 00:09:30.569 sys 0m0.437s 00:09:30.569 18:54:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.569 18:54:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:30.569 18:54:01 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:30.569 18:54:01 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:30.569 18:54:01 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:30.569 18:54:01 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:30.569 18:54:01 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:30.569 18:54:01 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:30.569 18:54:01 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:30.569 18:54:01 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:30.569 18:54:01 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:30.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.569 Waiting for block devices as requested 00:09:30.569 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.826 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:30.826 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.826 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.092 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:36.092 18:54:07 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:36.092 18:54:07 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:36.357 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:36.357 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:36.357 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:36.357 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:36.357 18:54:07 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:36.357 00:09:36.357 real 1m5.983s 00:09:36.357 user 1m26.707s 00:09:36.357 sys 0m9.839s 00:09:36.357 18:54:07 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.357 ************************************ 00:09:36.357 END TEST blockdev_nvme_gpt 00:09:36.357 18:54:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:36.357 ************************************ 00:09:36.357 18:54:07 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:36.357 18:54:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.357 18:54:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.357 18:54:07 -- common/autotest_common.sh@10 -- # set +x 00:09:36.357 ************************************ 00:09:36.357 START TEST nvme 00:09:36.357 ************************************ 00:09:36.357 18:54:07 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:36.357 * Looking for test storage... 00:09:36.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.357 18:54:07 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.357 18:54:07 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.357 18:54:07 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.615 18:54:07 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.615 18:54:07 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.615 18:54:07 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.615 18:54:07 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.615 18:54:07 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.615 18:54:07 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.615 18:54:07 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.615 18:54:07 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.615 18:54:07 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.615 18:54:07 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.615 18:54:07 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.615 18:54:07 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.615 18:54:07 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:36.615 18:54:07 nvme -- scripts/common.sh@345 -- # : 1 00:09:36.615 18:54:07 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.615 18:54:07 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.615 18:54:07 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:36.615 18:54:07 nvme -- scripts/common.sh@353 -- # local d=1 00:09:36.615 18:54:07 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.615 18:54:07 nvme -- scripts/common.sh@355 -- # echo 1 00:09:36.615 18:54:07 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.615 18:54:07 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:36.615 18:54:07 nvme -- scripts/common.sh@353 -- # local d=2 00:09:36.615 18:54:07 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.615 18:54:07 nvme -- scripts/common.sh@355 -- # echo 2 00:09:36.615 18:54:07 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.615 18:54:07 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.615 18:54:07 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.615 18:54:07 nvme -- scripts/common.sh@368 -- # return 0 00:09:36.615 18:54:07 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.615 18:54:07 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.615 --rc genhtml_branch_coverage=1 00:09:36.615 --rc genhtml_function_coverage=1 00:09:36.615 --rc genhtml_legend=1 00:09:36.615 --rc geninfo_all_blocks=1 00:09:36.615 --rc geninfo_unexecuted_blocks=1 00:09:36.615 00:09:36.615 ' 00:09:36.615 18:54:07 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.615 --rc genhtml_branch_coverage=1 00:09:36.615 --rc genhtml_function_coverage=1 00:09:36.615 --rc genhtml_legend=1 00:09:36.616 --rc geninfo_all_blocks=1 00:09:36.616 --rc geninfo_unexecuted_blocks=1 00:09:36.616 00:09:36.616 ' 00:09:36.616 18:54:07 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.616 --rc genhtml_branch_coverage=1 00:09:36.616 --rc genhtml_function_coverage=1 00:09:36.616 --rc genhtml_legend=1 00:09:36.616 --rc geninfo_all_blocks=1 00:09:36.616 --rc geninfo_unexecuted_blocks=1 00:09:36.616 00:09:36.616 ' 00:09:36.616 18:54:07 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.616 --rc genhtml_branch_coverage=1 00:09:36.616 --rc genhtml_function_coverage=1 00:09:36.616 --rc genhtml_legend=1 00:09:36.616 --rc geninfo_all_blocks=1 00:09:36.616 --rc geninfo_unexecuted_blocks=1 00:09:36.616 00:09:36.616 ' 00:09:36.616 18:54:07 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:36.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.441 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.441 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.441 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.700 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.700 18:54:08 nvme -- nvme/nvme.sh@79 -- # uname 00:09:37.700 18:54:08 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:37.700 18:54:08 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:37.700 18:54:08 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:37.700 18:54:08 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:37.700 18:54:08 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:09:37.700 18:54:08 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:09:37.700 Waiting for stub to ready for secondary processes... 00:09:37.700 18:54:08 nvme -- common/autotest_common.sh@1075 -- # stubpid=64518 00:09:37.700 18:54:08 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:09:37.700 18:54:08 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:37.700 18:54:08 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64518 ]] 00:09:37.700 18:54:08 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:37.700 18:54:08 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:37.700 [2024-11-26 18:54:08.813489] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:09:37.700 [2024-11-26 18:54:08.813816] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:38.634 [2024-11-26 18:54:09.574489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:38.634 [2024-11-26 18:54:09.694479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.634 [2024-11-26 18:54:09.694640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.634 [2024-11-26 18:54:09.694660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.634 [2024-11-26 18:54:09.716095] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:38.634 [2024-11-26 18:54:09.716150] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:38.634 [2024-11-26 18:54:09.727549] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:38.634 [2024-11-26 18:54:09.727717] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:38.634 [2024-11-26 18:54:09.730866] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:38.634 [2024-11-26 18:54:09.731128] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:38.634 [2024-11-26 18:54:09.731242] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:38.634 [2024-11-26 18:54:09.734265] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:38.634 [2024-11-26 18:54:09.734548] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:38.634 [2024-11-26 18:54:09.734661] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:38.634 [2024-11-26 18:54:09.737462] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:38.634 [2024-11-26 18:54:09.737924] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:38.634 [2024-11-26 18:54:09.738026] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:38.634 [2024-11-26 18:54:09.738090] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:38.634 [2024-11-26 18:54:09.738151] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:38.634 done. 00:09:38.634 18:54:09 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:38.634 18:54:09 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:09:38.634 18:54:09 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:38.634 18:54:09 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:09:38.634 18:54:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.634 18:54:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 ************************************ 00:09:38.634 START TEST nvme_reset 00:09:38.634 ************************************ 00:09:38.634 18:54:09 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:38.893 Initializing NVMe Controllers 00:09:38.893 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:38.893 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:38.893 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:38.893 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:38.893 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:39.151 ************************************ 00:09:39.151 END TEST nvme_reset 00:09:39.151 ************************************ 00:09:39.151 00:09:39.151 real 0m0.325s 00:09:39.151 user 0m0.149s 00:09:39.151 sys 0m0.129s 00:09:39.151 18:54:10 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.151 18:54:10 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:39.151 18:54:10 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:39.151 18:54:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.151 18:54:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.151 18:54:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:39.151 ************************************ 00:09:39.151 START TEST nvme_identify 00:09:39.151 ************************************ 00:09:39.151 18:54:10 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:09:39.151 18:54:10 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:39.151 18:54:10 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:39.151 18:54:10 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:39.151 18:54:10 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:39.151 18:54:10 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:39.151 18:54:10 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:09:39.151 18:54:10 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:39.151 18:54:10 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:39.151 18:54:10 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:39.151 18:54:10 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:39.151 18:54:10 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:39.151 18:54:10 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:39.412 [2024-11-26 
18:54:10.511685] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64539 terminated unexpected 00:09:39.412 ===================================================== 00:09:39.412 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:39.413 ===================================================== 00:09:39.413 Controller Capabilities/Features 00:09:39.413 ================================ 00:09:39.413 Vendor ID: 1b36 00:09:39.413 Subsystem Vendor ID: 1af4 00:09:39.413 Serial Number: 12340 00:09:39.413 Model Number: QEMU NVMe Ctrl 00:09:39.413 Firmware Version: 8.0.0 00:09:39.413 Recommended Arb Burst: 6 00:09:39.413 IEEE OUI Identifier: 00 54 52 00:09:39.413 Multi-path I/O 00:09:39.413 May have multiple subsystem ports: No 00:09:39.413 May have multiple controllers: No 00:09:39.413 Associated with SR-IOV VF: No 00:09:39.413 Max Data Transfer Size: 524288 00:09:39.413 Max Number of Namespaces: 256 00:09:39.413 Max Number of I/O Queues: 64 00:09:39.413 NVMe Specification Version (VS): 1.4 00:09:39.413 NVMe Specification Version (Identify): 1.4 00:09:39.413 Maximum Queue Entries: 2048 00:09:39.413 Contiguous Queues Required: Yes 00:09:39.413 Arbitration Mechanisms Supported 00:09:39.413 Weighted Round Robin: Not Supported 00:09:39.413 Vendor Specific: Not Supported 00:09:39.413 Reset Timeout: 7500 ms 00:09:39.413 Doorbell Stride: 4 bytes 00:09:39.413 NVM Subsystem Reset: Not Supported 00:09:39.413 Command Sets Supported 00:09:39.413 NVM Command Set: Supported 00:09:39.413 Boot Partition: Not Supported 00:09:39.413 Memory Page Size Minimum: 4096 bytes 00:09:39.413 Memory Page Size Maximum: 65536 bytes 00:09:39.413 Persistent Memory Region: Not Supported 00:09:39.413 Optional Asynchronous Events Supported 00:09:39.413 Namespace Attribute Notices: Supported 00:09:39.413 Firmware Activation Notices: Not Supported 00:09:39.413 ANA Change Notices: Not Supported 00:09:39.413 PLE Aggregate Log Change Notices: Not Supported 00:09:39.413 LBA Status Info Alert Notices: Not Supported 00:09:39.413 EGE Aggregate Log Change Notices: Not Supported 00:09:39.413 Normal NVM Subsystem Shutdown event: Not Supported 00:09:39.413 Zone Descriptor Change Notices: Not Supported 00:09:39.413 Discovery Log Change Notices: Not Supported 00:09:39.413 Controller Attributes 00:09:39.413 128-bit Host Identifier: Not Supported 00:09:39.413 Non-Operational Permissive Mode: Not Supported 00:09:39.413 NVM Sets: Not Supported 00:09:39.413 Read Recovery Levels: Not Supported 00:09:39.413 Endurance Groups: Not Supported 00:09:39.413 Predictable Latency Mode: Not Supported 00:09:39.413 Traffic Based Keep ALive: Not Supported 00:09:39.413 Namespace Granularity: Not Supported 00:09:39.413 SQ Associations: Not Supported 00:09:39.413 UUID List: Not Supported 00:09:39.413 Multi-Domain Subsystem: Not Supported 00:09:39.413 Fixed Capacity Management: Not Supported 00:09:39.413 Variable Capacity Management: Not Supported 00:09:39.413 Delete Endurance Group: Not Supported 00:09:39.413 Delete NVM Set: Not Supported 00:09:39.413 Extended LBA Formats Supported: Supported 00:09:39.413 Flexible Data Placement Supported: Not Supported 00:09:39.413 00:09:39.413 Controller Memory Buffer Support 00:09:39.413 ================================ 00:09:39.413 Supported: No 00:09:39.413 00:09:39.413 Persistent Memory Region Support 00:09:39.413 ================================ 00:09:39.413 Supported: No 00:09:39.413 00:09:39.413 Admin Command Set Attributes 00:09:39.413 ============================ 00:09:39.413 Security Send/Receive: 
Not Supported 00:09:39.413 Format NVM: Supported 00:09:39.413 Firmware Activate/Download: Not Supported 00:09:39.413 Namespace Management: Supported 00:09:39.413 Device Self-Test: Not Supported 00:09:39.413 Directives: Supported 00:09:39.413 NVMe-MI: Not Supported 00:09:39.413 Virtualization Management: Not Supported 00:09:39.413 Doorbell Buffer Config: Supported 00:09:39.413 Get LBA Status Capability: Not Supported 00:09:39.413 Command & Feature Lockdown Capability: Not Supported 00:09:39.413 Abort Command Limit: 4 00:09:39.413 Async Event Request Limit: 4 00:09:39.413 Number of Firmware Slots: N/A 00:09:39.413 Firmware Slot 1 Read-Only: N/A 00:09:39.413 Firmware Activation Without Reset: N/A 00:09:39.413 Multiple Update Detection Support: N/A 00:09:39.413 Firmware Update Granularity: No Information Provided 00:09:39.413 Per-Namespace SMART Log: Yes 00:09:39.413 Asymmetric Namespace Access Log Page: Not Supported 00:09:39.413 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:39.413 Command Effects Log Page: Supported 00:09:39.413 Get Log Page Extended Data: Supported 00:09:39.413 Telemetry Log Pages: Not Supported 00:09:39.413 Persistent Event Log Pages: Not Supported 00:09:39.413 Supported Log Pages Log Page: May Support 00:09:39.413 Commands Supported & Effects Log Page: Not Supported 00:09:39.413 Feature Identifiers & Effects Log Page:May Support 00:09:39.413 NVMe-MI Commands & Effects Log Page: May Support 00:09:39.413 Data Area 4 for Telemetry Log: Not Supported 00:09:39.413 Error Log Page Entries Supported: 1 00:09:39.413 Keep Alive: Not Supported 00:09:39.413 00:09:39.413 NVM Command Set Attributes 00:09:39.413 ========================== 00:09:39.413 Submission Queue Entry Size 00:09:39.413 Max: 64 00:09:39.413 Min: 64 00:09:39.413 Completion Queue Entry Size 00:09:39.413 Max: 16 00:09:39.413 Min: 16 00:09:39.413 Number of Namespaces: 256 00:09:39.413 Compare Command: Supported 00:09:39.413 Write Uncorrectable Command: Not Supported 00:09:39.413 Dataset Management Command: Supported 00:09:39.413 Write Zeroes Command: Supported 00:09:39.413 Set Features Save Field: Supported 00:09:39.413 Reservations: Not Supported 00:09:39.413 Timestamp: Supported 00:09:39.413 Copy: Supported 00:09:39.413 Volatile Write Cache: Present 00:09:39.413 Atomic Write Unit (Normal): 1 00:09:39.413 Atomic Write Unit (PFail): 1 00:09:39.413 Atomic Compare & Write Unit: 1 00:09:39.413 Fused Compare & Write: Not Supported 00:09:39.413 Scatter-Gather List 00:09:39.413 SGL Command Set: Supported 00:09:39.413 SGL Keyed: Not Supported 00:09:39.413 SGL Bit Bucket Descriptor: Not Supported 00:09:39.413 SGL Metadata Pointer: Not Supported 00:09:39.413 Oversized SGL: Not Supported 00:09:39.413 SGL Metadata Address: Not Supported 00:09:39.413 SGL Offset: Not Supported 00:09:39.413 Transport SGL Data Block: Not Supported 00:09:39.413 Replay Protected Memory Block: Not Supported 00:09:39.413 00:09:39.413 Firmware Slot Information 00:09:39.413 ========================= 00:09:39.413 Active slot: 1 00:09:39.413 Slot 1 Firmware Revision: 1.0 00:09:39.413 00:09:39.413 00:09:39.413 Commands Supported and Effects 00:09:39.413 ============================== 00:09:39.413 Admin Commands 00:09:39.413 -------------- 00:09:39.413 Delete I/O Submission Queue (00h): Supported 00:09:39.413 Create I/O Submission Queue (01h): Supported 00:09:39.413 Get Log Page (02h): Supported 00:09:39.413 Delete I/O Completion Queue (04h): Supported 00:09:39.413 Create I/O Completion Queue (05h): Supported 00:09:39.413 Identify (06h): Supported 
00:09:39.413 Abort (08h): Supported 00:09:39.413 Set Features (09h): Supported 00:09:39.413 Get Features (0Ah): Supported 00:09:39.413 Asynchronous Event Request (0Ch): Supported 00:09:39.413 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:39.413 Directive Send (19h): Supported 00:09:39.413 Directive Receive (1Ah): Supported 00:09:39.413 Virtualization Management (1Ch): Supported 00:09:39.413 Doorbell Buffer Config (7Ch): Supported 00:09:39.413 Format NVM (80h): Supported LBA-Change 00:09:39.413 I/O Commands 00:09:39.413 ------------ 00:09:39.413 Flush (00h): Supported LBA-Change 00:09:39.413 Write (01h): Supported LBA-Change 00:09:39.413 Read (02h): Supported 00:09:39.413 Compare (05h): Supported 00:09:39.413 Write Zeroes (08h): Supported LBA-Change 00:09:39.413 Dataset Management (09h): Supported LBA-Change 00:09:39.413 Unknown (0Ch): Supported 00:09:39.413 Unknown (12h): Supported 00:09:39.413 Copy (19h): Supported LBA-Change 00:09:39.413 Unknown (1Dh): Supported LBA-Change 00:09:39.413 00:09:39.413 Error Log 00:09:39.413 ========= 00:09:39.413 00:09:39.413 Arbitration 00:09:39.413 =========== 00:09:39.413 Arbitration Burst: no limit 00:09:39.413 00:09:39.413 Power Management 00:09:39.413 ================ 00:09:39.413 Number of Power States: 1 00:09:39.413 Current Power State: Power State #0 00:09:39.413 Power State #0: 00:09:39.413 Max Power: 25.00 W 00:09:39.413 Non-Operational State: Operational 00:09:39.413 Entry Latency: 16 microseconds 00:09:39.413 Exit Latency: 4 microseconds 00:09:39.413 Relative Read Throughput: 0 00:09:39.413 Relative Read Latency: 0 00:09:39.413 Relative Write Throughput: 0 00:09:39.414 Relative Write Latency: 0 00:09:39.414 [2024-11-26 18:54:10.513146] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64539 terminated unexpected 00:09:39.414 Idle Power: Not Reported 00:09:39.414 Active Power: Not Reported 00:09:39.414 Non-Operational Permissive Mode: Not Supported 00:09:39.414 00:09:39.414 Health Information 00:09:39.414 ================== 00:09:39.414 Critical Warnings: 00:09:39.414 Available Spare Space: OK 00:09:39.414 Temperature: OK 00:09:39.414 Device Reliability: OK 00:09:39.414 Read Only: No 00:09:39.414 Volatile Memory Backup: OK 00:09:39.414 Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.414 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:39.414 Available Spare: 0% 00:09:39.414 Available Spare Threshold: 0% 00:09:39.414 Life Percentage Used: 0% 00:09:39.414 Data Units Read: 660 00:09:39.414 Data Units Written: 588 00:09:39.414 Host Read Commands: 33556 00:09:39.414 Host Write Commands: 33342 00:09:39.414 Controller Busy Time: 0 minutes 00:09:39.414 Power Cycles: 0 00:09:39.414 Power On Hours: 0 hours 00:09:39.414 Unsafe Shutdowns: 0 00:09:39.414 Unrecoverable Media Errors: 0 00:09:39.414 Lifetime Error Log Entries: 0 00:09:39.414 Warning Temperature Time: 0 minutes 00:09:39.414 Critical Temperature Time: 0 minutes 00:09:39.414 00:09:39.414 Number of Queues 00:09:39.414 ================ 00:09:39.414 Number of I/O Submission Queues: 64 00:09:39.414 Number of I/O Completion Queues: 64 00:09:39.414 00:09:39.414 ZNS Specific Controller Data 00:09:39.414 ============================ 00:09:39.414 Zone Append Size Limit: 0 00:09:39.414 00:09:39.414 00:09:39.414 Active Namespaces 00:09:39.414 ================= 00:09:39.414 Namespace ID:1 00:09:39.414 Error Recovery Timeout: Unlimited 00:09:39.414 Command Set Identifier: NVM (00h) 00:09:39.414 Deallocate: Supported
Deallocated/Unwritten Error: Supported 00:09:39.414 Deallocated Read Value: All 0x00 00:09:39.414 Deallocate in Write Zeroes: Not Supported 00:09:39.414 Deallocated Guard Field: 0xFFFF 00:09:39.414 Flush: Supported 00:09:39.414 Reservation: Not Supported 00:09:39.414 Metadata Transferred as: Separate Metadata Buffer 00:09:39.414 Namespace Sharing Capabilities: Private 00:09:39.414 Size (in LBAs): 1548666 (5GiB) 00:09:39.414 Capacity (in LBAs): 1548666 (5GiB) 00:09:39.414 Utilization (in LBAs): 1548666 (5GiB) 00:09:39.414 Thin Provisioning: Not Supported 00:09:39.414 Per-NS Atomic Units: No 00:09:39.414 Maximum Single Source Range Length: 128 00:09:39.414 Maximum Copy Length: 128 00:09:39.414 Maximum Source Range Count: 128 00:09:39.414 NGUID/EUI64 Never Reused: No 00:09:39.414 Namespace Write Protected: No 00:09:39.414 Number of LBA Formats: 8 00:09:39.414 Current LBA Format: LBA Format #07 00:09:39.414 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:39.414 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:39.414 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:39.414 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:39.414 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:39.414 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:39.414 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:39.414 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:39.414 00:09:39.414 NVM Specific Namespace Data 00:09:39.414 =========================== 00:09:39.414 Logical Block Storage Tag Mask: 0 00:09:39.414 Protection Information Capabilities: 00:09:39.414 16b Guard Protection Information Storage Tag Support: No 00:09:39.414 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:39.414 Storage Tag Check Read Support: No 00:09:39.414 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.414 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.414 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.414 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.414 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.414 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.414 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.414 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.414 ===================================================== 00:09:39.414 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:39.414 ===================================================== 00:09:39.414 Controller Capabilities/Features 00:09:39.414 ================================ 00:09:39.414 Vendor ID: 1b36 00:09:39.414 Subsystem Vendor ID: 1af4 00:09:39.414 Serial Number: 12341 00:09:39.414 Model Number: QEMU NVMe Ctrl 00:09:39.414 Firmware Version: 8.0.0 00:09:39.414 Recommended Arb Burst: 6 00:09:39.414 IEEE OUI Identifier: 00 54 52 00:09:39.414 Multi-path I/O 00:09:39.414 May have multiple subsystem ports: No 00:09:39.414 May have multiple controllers: No 00:09:39.414 Associated with SR-IOV VF: No 00:09:39.414 Max Data Transfer Size: 524288 00:09:39.414 Max Number of Namespaces: 256 00:09:39.414 Max Number of I/O Queues: 64 00:09:39.414 NVMe Specification Version (VS): 1.4 00:09:39.414 NVMe 
Specification Version (Identify): 1.4 00:09:39.414 Maximum Queue Entries: 2048 00:09:39.414 Contiguous Queues Required: Yes 00:09:39.414 Arbitration Mechanisms Supported 00:09:39.414 Weighted Round Robin: Not Supported 00:09:39.414 Vendor Specific: Not Supported 00:09:39.414 Reset Timeout: 7500 ms 00:09:39.414 Doorbell Stride: 4 bytes 00:09:39.414 NVM Subsystem Reset: Not Supported 00:09:39.414 Command Sets Supported 00:09:39.414 NVM Command Set: Supported 00:09:39.414 Boot Partition: Not Supported 00:09:39.414 Memory Page Size Minimum: 4096 bytes 00:09:39.414 Memory Page Size Maximum: 65536 bytes 00:09:39.414 Persistent Memory Region: Not Supported 00:09:39.414 Optional Asynchronous Events Supported 00:09:39.414 Namespace Attribute Notices: Supported 00:09:39.414 Firmware Activation Notices: Not Supported 00:09:39.414 ANA Change Notices: Not Supported 00:09:39.414 PLE Aggregate Log Change Notices: Not Supported 00:09:39.414 LBA Status Info Alert Notices: Not Supported 00:09:39.414 EGE Aggregate Log Change Notices: Not Supported 00:09:39.414 Normal NVM Subsystem Shutdown event: Not Supported 00:09:39.414 Zone Descriptor Change Notices: Not Supported 00:09:39.414 Discovery Log Change Notices: Not Supported 00:09:39.414 Controller Attributes 00:09:39.414 128-bit Host Identifier: Not Supported 00:09:39.414 Non-Operational Permissive Mode: Not Supported 00:09:39.414 NVM Sets: Not Supported 00:09:39.414 Read Recovery Levels: Not Supported 00:09:39.414 Endurance Groups: Not Supported 00:09:39.414 Predictable Latency Mode: Not Supported 00:09:39.414 Traffic Based Keep ALive: Not Supported 00:09:39.414 Namespace Granularity: Not Supported 00:09:39.414 SQ Associations: Not Supported 00:09:39.414 UUID List: Not Supported 00:09:39.414 Multi-Domain Subsystem: Not Supported 00:09:39.414 Fixed Capacity Management: Not Supported 00:09:39.414 Variable Capacity Management: Not Supported 00:09:39.414 Delete Endurance Group: Not Supported 00:09:39.414 Delete NVM Set: Not Supported 00:09:39.414 Extended LBA Formats Supported: Supported 00:09:39.414 Flexible Data Placement Supported: Not Supported 00:09:39.414 00:09:39.414 Controller Memory Buffer Support 00:09:39.414 ================================ 00:09:39.414 Supported: No 00:09:39.414 00:09:39.414 Persistent Memory Region Support 00:09:39.414 ================================ 00:09:39.414 Supported: No 00:09:39.414 00:09:39.414 Admin Command Set Attributes 00:09:39.415 ============================ 00:09:39.415 Security Send/Receive: Not Supported 00:09:39.415 Format NVM: Supported 00:09:39.415 Firmware Activate/Download: Not Supported 00:09:39.415 Namespace Management: Supported 00:09:39.415 Device Self-Test: Not Supported 00:09:39.415 Directives: Supported 00:09:39.415 NVMe-MI: Not Supported 00:09:39.415 Virtualization Management: Not Supported 00:09:39.415 Doorbell Buffer Config: Supported 00:09:39.415 Get LBA Status Capability: Not Supported 00:09:39.415 Command & Feature Lockdown Capability: Not Supported 00:09:39.415 Abort Command Limit: 4 00:09:39.415 Async Event Request Limit: 4 00:09:39.415 Number of Firmware Slots: N/A 00:09:39.415 Firmware Slot 1 Read-Only: N/A 00:09:39.415 Firmware Activation Without Reset: N/A 00:09:39.415 Multiple Update Detection Support: N/A 00:09:39.415 Firmware Update Granularity: No Information Provided 00:09:39.415 Per-Namespace SMART Log: Yes 00:09:39.415 Asymmetric Namespace Access Log Page: Not Supported 00:09:39.415 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:39.415 Command Effects Log Page: Supported 
00:09:39.415 Get Log Page Extended Data: Supported 00:09:39.415 Telemetry Log Pages: Not Supported 00:09:39.415 Persistent Event Log Pages: Not Supported 00:09:39.415 Supported Log Pages Log Page: May Support 00:09:39.415 Commands Supported & Effects Log Page: Not Supported 00:09:39.415 Feature Identifiers & Effects Log Page:May Support 00:09:39.415 NVMe-MI Commands & Effects Log Page: May Support 00:09:39.415 Data Area 4 for Telemetry Log: Not Supported 00:09:39.415 Error Log Page Entries Supported: 1 00:09:39.415 Keep Alive: Not Supported 00:09:39.415 00:09:39.415 NVM Command Set Attributes 00:09:39.415 ========================== 00:09:39.415 Submission Queue Entry Size 00:09:39.415 Max: 64 00:09:39.415 Min: 64 00:09:39.415 Completion Queue Entry Size 00:09:39.415 Max: 16 00:09:39.415 Min: 16 00:09:39.415 Number of Namespaces: 256 00:09:39.415 Compare Command: Supported 00:09:39.415 Write Uncorrectable Command: Not Supported 00:09:39.415 Dataset Management Command: Supported 00:09:39.415 Write Zeroes Command: Supported 00:09:39.415 Set Features Save Field: Supported 00:09:39.415 Reservations: Not Supported 00:09:39.415 Timestamp: Supported 00:09:39.415 Copy: Supported 00:09:39.415 Volatile Write Cache: Present 00:09:39.415 Atomic Write Unit (Normal): 1 00:09:39.415 Atomic Write Unit (PFail): 1 00:09:39.415 Atomic Compare & Write Unit: 1 00:09:39.415 Fused Compare & Write: Not Supported 00:09:39.415 Scatter-Gather List 00:09:39.415 SGL Command Set: Supported 00:09:39.415 SGL Keyed: Not Supported 00:09:39.415 SGL Bit Bucket Descriptor: Not Supported 00:09:39.415 SGL Metadata Pointer: Not Supported 00:09:39.415 Oversized SGL: Not Supported 00:09:39.415 SGL Metadata Address: Not Supported 00:09:39.415 SGL Offset: Not Supported 00:09:39.415 Transport SGL Data Block: Not Supported 00:09:39.415 Replay Protected Memory Block: Not Supported 00:09:39.415 00:09:39.415 Firmware Slot Information 00:09:39.415 ========================= 00:09:39.415 Active slot: 1 00:09:39.415 Slot 1 Firmware Revision: 1.0 00:09:39.415 00:09:39.415 00:09:39.415 Commands Supported and Effects 00:09:39.415 ============================== 00:09:39.415 Admin Commands 00:09:39.415 -------------- 00:09:39.415 Delete I/O Submission Queue (00h): Supported 00:09:39.415 Create I/O Submission Queue (01h): Supported 00:09:39.415 Get Log Page (02h): Supported 00:09:39.415 Delete I/O Completion Queue (04h): Supported 00:09:39.415 Create I/O Completion Queue (05h): Supported 00:09:39.415 Identify (06h): Supported 00:09:39.415 Abort (08h): Supported 00:09:39.415 Set Features (09h): Supported 00:09:39.415 Get Features (0Ah): Supported 00:09:39.415 Asynchronous Event Request (0Ch): Supported 00:09:39.415 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:39.415 Directive Send (19h): Supported 00:09:39.415 Directive Receive (1Ah): Supported 00:09:39.415 Virtualization Management (1Ch): Supported 00:09:39.415 Doorbell Buffer Config (7Ch): Supported 00:09:39.415 Format NVM (80h): Supported LBA-Change 00:09:39.415 I/O Commands 00:09:39.415 ------------ 00:09:39.415 Flush (00h): Supported LBA-Change 00:09:39.415 Write (01h): Supported LBA-Change 00:09:39.415 Read (02h): Supported 00:09:39.415 Compare (05h): Supported 00:09:39.415 Write Zeroes (08h): Supported LBA-Change 00:09:39.415 Dataset Management (09h): Supported LBA-Change 00:09:39.415 Unknown (0Ch): Supported 00:09:39.415 Unknown (12h): Supported 00:09:39.415 Copy (19h): Supported LBA-Change 00:09:39.415 Unknown (1Dh): Supported LBA-Change 00:09:39.415 00:09:39.415 Error 
Log 00:09:39.415 ========= 00:09:39.415 00:09:39.415 Arbitration 00:09:39.415 =========== 00:09:39.415 Arbitration Burst: no limit 00:09:39.415 00:09:39.415 Power Management 00:09:39.415 ================ 00:09:39.415 Number of Power States: 1 00:09:39.415 Current Power State: Power State #0 00:09:39.415 Power State #0: 00:09:39.415 Max Power: 25.00 W 00:09:39.415 Non-Operational State: Operational 00:09:39.415 Entry Latency: 16 microseconds 00:09:39.415 Exit Latency: 4 microseconds 00:09:39.415 Relative Read Throughput: 0 00:09:39.415 Relative Read Latency: 0 00:09:39.415 Relative Write Throughput: 0 00:09:39.415 Relative Write Latency: 0 00:09:39.415 Idle Power: Not Reported 00:09:39.415 Active Power: Not Reported 00:09:39.415 Non-Operational Permissive Mode: Not Supported 00:09:39.415 00:09:39.415 Health Information 00:09:39.415 ================== 00:09:39.415 Critical Warnings: 00:09:39.415 Available Spare Space: OK 00:09:39.415 [2024-11-26 18:54:10.514099] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64539 terminated unexpected 00:09:39.415 Temperature: OK 00:09:39.415 Device Reliability: OK 00:09:39.415 Read Only: No 00:09:39.415 Volatile Memory Backup: OK 00:09:39.415 Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.415 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:39.415 Available Spare: 0% 00:09:39.415 Available Spare Threshold: 0% 00:09:39.415 Life Percentage Used: 0% 00:09:39.415 Data Units Read: 1017 00:09:39.415 Data Units Written: 890 00:09:39.415 Host Read Commands: 49537 00:09:39.415 Host Write Commands: 48427 00:09:39.415 Controller Busy Time: 0 minutes 00:09:39.415 Power Cycles: 0 00:09:39.415 Power On Hours: 0 hours 00:09:39.415 Unsafe Shutdowns: 0 00:09:39.415 Unrecoverable Media Errors: 0 00:09:39.415 Lifetime Error Log Entries: 0 00:09:39.415 Warning Temperature Time: 0 minutes 00:09:39.415 Critical Temperature Time: 0 minutes 00:09:39.415 00:09:39.415 Number of Queues 00:09:39.415 ================ 00:09:39.415 Number of I/O Submission Queues: 64 00:09:39.415 Number of I/O Completion Queues: 64 00:09:39.415 00:09:39.415 ZNS Specific Controller Data 00:09:39.415 ============================ 00:09:39.415 Zone Append Size Limit: 0 00:09:39.415 00:09:39.415 00:09:39.415 Active Namespaces 00:09:39.415 ================= 00:09:39.415 Namespace ID:1 00:09:39.415 Error Recovery Timeout: Unlimited 00:09:39.415 Command Set Identifier: NVM (00h) 00:09:39.415 Deallocate: Supported 00:09:39.415 Deallocated/Unwritten Error: Supported 00:09:39.415 Deallocated Read Value: All 0x00 00:09:39.415 Deallocate in Write Zeroes: Not Supported 00:09:39.415 Deallocated Guard Field: 0xFFFF 00:09:39.415 Flush: Supported 00:09:39.415 Reservation: Not Supported 00:09:39.415 Namespace Sharing Capabilities: Private 00:09:39.415 Size (in LBAs): 1310720 (5GiB) 00:09:39.415 Capacity (in LBAs): 1310720 (5GiB) 00:09:39.415 Utilization (in LBAs): 1310720 (5GiB) 00:09:39.415 Thin Provisioning: Not Supported 00:09:39.415 Per-NS Atomic Units: No 00:09:39.415 Maximum Single Source Range Length: 128 00:09:39.415 Maximum Copy Length: 128 00:09:39.415 Maximum Source Range Count: 128 00:09:39.415 NGUID/EUI64 Never Reused: No 00:09:39.415 Namespace Write Protected: No 00:09:39.415 Number of LBA Formats: 8 00:09:39.415 Current LBA Format: LBA Format #04 00:09:39.415 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:39.415 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:39.415 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:39.415 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:09:39.415 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:39.415 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:39.415 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:39.415 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:39.415 00:09:39.415 NVM Specific Namespace Data 00:09:39.416 =========================== 00:09:39.416 Logical Block Storage Tag Mask: 0 00:09:39.416 Protection Information Capabilities: 00:09:39.416 16b Guard Protection Information Storage Tag Support: No 00:09:39.416 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:39.416 Storage Tag Check Read Support: No 00:09:39.416 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.416 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.416 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.416 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.416 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.416 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.416 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.416 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.416 ===================================================== 00:09:39.416 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:39.416 ===================================================== 00:09:39.416 Controller Capabilities/Features 00:09:39.416 ================================ 00:09:39.416 Vendor ID: 1b36 00:09:39.416 Subsystem Vendor ID: 1af4 00:09:39.416 Serial Number: 12343 00:09:39.416 Model Number: QEMU NVMe Ctrl 00:09:39.416 Firmware Version: 8.0.0 00:09:39.416 Recommended Arb Burst: 6 00:09:39.416 IEEE OUI Identifier: 00 54 52 00:09:39.416 Multi-path I/O 00:09:39.416 May have multiple subsystem ports: No 00:09:39.416 May have multiple controllers: Yes 00:09:39.416 Associated with SR-IOV VF: No 00:09:39.416 Max Data Transfer Size: 524288 00:09:39.416 Max Number of Namespaces: 256 00:09:39.416 Max Number of I/O Queues: 64 00:09:39.416 NVMe Specification Version (VS): 1.4 00:09:39.416 NVMe Specification Version (Identify): 1.4 00:09:39.416 Maximum Queue Entries: 2048 00:09:39.416 Contiguous Queues Required: Yes 00:09:39.416 Arbitration Mechanisms Supported 00:09:39.416 Weighted Round Robin: Not Supported 00:09:39.416 Vendor Specific: Not Supported 00:09:39.416 Reset Timeout: 7500 ms 00:09:39.416 Doorbell Stride: 4 bytes 00:09:39.416 NVM Subsystem Reset: Not Supported 00:09:39.416 Command Sets Supported 00:09:39.416 NVM Command Set: Supported 00:09:39.416 Boot Partition: Not Supported 00:09:39.416 Memory Page Size Minimum: 4096 bytes 00:09:39.416 Memory Page Size Maximum: 65536 bytes 00:09:39.416 Persistent Memory Region: Not Supported 00:09:39.416 Optional Asynchronous Events Supported 00:09:39.416 Namespace Attribute Notices: Supported 00:09:39.416 Firmware Activation Notices: Not Supported 00:09:39.416 ANA Change Notices: Not Supported 00:09:39.416 PLE Aggregate Log Change Notices: Not Supported 00:09:39.416 LBA Status Info Alert Notices: Not Supported 00:09:39.416 EGE Aggregate Log Change Notices: Not Supported 00:09:39.416 Normal NVM Subsystem Shutdown event: Not Supported 00:09:39.416 Zone 
Descriptor Change Notices: Not Supported 00:09:39.416 Discovery Log Change Notices: Not Supported 00:09:39.416 Controller Attributes 00:09:39.416 128-bit Host Identifier: Not Supported 00:09:39.416 Non-Operational Permissive Mode: Not Supported 00:09:39.416 NVM Sets: Not Supported 00:09:39.416 Read Recovery Levels: Not Supported 00:09:39.416 Endurance Groups: Supported 00:09:39.416 Predictable Latency Mode: Not Supported 00:09:39.416 Traffic Based Keep ALive: Not Supported 00:09:39.416 Namespace Granularity: Not Supported 00:09:39.416 SQ Associations: Not Supported 00:09:39.416 UUID List: Not Supported 00:09:39.416 Multi-Domain Subsystem: Not Supported 00:09:39.416 Fixed Capacity Management: Not Supported 00:09:39.416 Variable Capacity Management: Not Supported 00:09:39.416 Delete Endurance Group: Not Supported 00:09:39.416 Delete NVM Set: Not Supported 00:09:39.416 Extended LBA Formats Supported: Supported 00:09:39.416 Flexible Data Placement Supported: Supported 00:09:39.416 00:09:39.416 Controller Memory Buffer Support 00:09:39.416 ================================ 00:09:39.416 Supported: No 00:09:39.416 00:09:39.416 Persistent Memory Region Support 00:09:39.416 ================================ 00:09:39.416 Supported: No 00:09:39.416 00:09:39.416 Admin Command Set Attributes 00:09:39.416 ============================ 00:09:39.416 Security Send/Receive: Not Supported 00:09:39.416 Format NVM: Supported 00:09:39.416 Firmware Activate/Download: Not Supported 00:09:39.416 Namespace Management: Supported 00:09:39.416 Device Self-Test: Not Supported 00:09:39.416 Directives: Supported 00:09:39.416 NVMe-MI: Not Supported 00:09:39.416 Virtualization Management: Not Supported 00:09:39.416 Doorbell Buffer Config: Supported 00:09:39.416 Get LBA Status Capability: Not Supported 00:09:39.416 Command & Feature Lockdown Capability: Not Supported 00:09:39.416 Abort Command Limit: 4 00:09:39.416 Async Event Request Limit: 4 00:09:39.416 Number of Firmware Slots: N/A 00:09:39.416 Firmware Slot 1 Read-Only: N/A 00:09:39.416 Firmware Activation Without Reset: N/A 00:09:39.416 Multiple Update Detection Support: N/A 00:09:39.416 Firmware Update Granularity: No Information Provided 00:09:39.416 Per-Namespace SMART Log: Yes 00:09:39.416 Asymmetric Namespace Access Log Page: Not Supported 00:09:39.416 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:39.416 Command Effects Log Page: Supported 00:09:39.416 Get Log Page Extended Data: Supported 00:09:39.416 Telemetry Log Pages: Not Supported 00:09:39.416 Persistent Event Log Pages: Not Supported 00:09:39.416 Supported Log Pages Log Page: May Support 00:09:39.416 Commands Supported & Effects Log Page: Not Supported 00:09:39.416 Feature Identifiers & Effects Log Page:May Support 00:09:39.416 NVMe-MI Commands & Effects Log Page: May Support 00:09:39.416 Data Area 4 for Telemetry Log: Not Supported 00:09:39.416 Error Log Page Entries Supported: 1 00:09:39.416 Keep Alive: Not Supported 00:09:39.416 00:09:39.416 NVM Command Set Attributes 00:09:39.416 ========================== 00:09:39.416 Submission Queue Entry Size 00:09:39.416 Max: 64 00:09:39.416 Min: 64 00:09:39.416 Completion Queue Entry Size 00:09:39.416 Max: 16 00:09:39.416 Min: 16 00:09:39.416 Number of Namespaces: 256 00:09:39.416 Compare Command: Supported 00:09:39.416 Write Uncorrectable Command: Not Supported 00:09:39.416 Dataset Management Command: Supported 00:09:39.416 Write Zeroes Command: Supported 00:09:39.416 Set Features Save Field: Supported 00:09:39.416 Reservations: Not Supported 00:09:39.416 
Timestamp: Supported 00:09:39.416 Copy: Supported 00:09:39.416 Volatile Write Cache: Present 00:09:39.416 Atomic Write Unit (Normal): 1 00:09:39.416 Atomic Write Unit (PFail): 1 00:09:39.416 Atomic Compare & Write Unit: 1 00:09:39.416 Fused Compare & Write: Not Supported 00:09:39.416 Scatter-Gather List 00:09:39.416 SGL Command Set: Supported 00:09:39.416 SGL Keyed: Not Supported 00:09:39.416 SGL Bit Bucket Descriptor: Not Supported 00:09:39.416 SGL Metadata Pointer: Not Supported 00:09:39.416 Oversized SGL: Not Supported 00:09:39.416 SGL Metadata Address: Not Supported 00:09:39.416 SGL Offset: Not Supported 00:09:39.416 Transport SGL Data Block: Not Supported 00:09:39.416 Replay Protected Memory Block: Not Supported 00:09:39.416 00:09:39.416 Firmware Slot Information 00:09:39.416 ========================= 00:09:39.416 Active slot: 1 00:09:39.416 Slot 1 Firmware Revision: 1.0 00:09:39.416 00:09:39.416 00:09:39.416 Commands Supported and Effects 00:09:39.416 ============================== 00:09:39.416 Admin Commands 00:09:39.416 -------------- 00:09:39.416 Delete I/O Submission Queue (00h): Supported 00:09:39.416 Create I/O Submission Queue (01h): Supported 00:09:39.416 Get Log Page (02h): Supported 00:09:39.416 Delete I/O Completion Queue (04h): Supported 00:09:39.416 Create I/O Completion Queue (05h): Supported 00:09:39.416 Identify (06h): Supported 00:09:39.416 Abort (08h): Supported 00:09:39.416 Set Features (09h): Supported 00:09:39.416 Get Features (0Ah): Supported 00:09:39.416 Asynchronous Event Request (0Ch): Supported 00:09:39.416 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:39.416 Directive Send (19h): Supported 00:09:39.416 Directive Receive (1Ah): Supported 00:09:39.416 Virtualization Management (1Ch): Supported 00:09:39.417 Doorbell Buffer Config (7Ch): Supported 00:09:39.417 Format NVM (80h): Supported LBA-Change 00:09:39.417 I/O Commands 00:09:39.417 ------------ 00:09:39.417 Flush (00h): Supported LBA-Change 00:09:39.417 Write (01h): Supported LBA-Change 00:09:39.417 Read (02h): Supported 00:09:39.417 Compare (05h): Supported 00:09:39.417 Write Zeroes (08h): Supported LBA-Change 00:09:39.417 Dataset Management (09h): Supported LBA-Change 00:09:39.417 Unknown (0Ch): Supported 00:09:39.417 Unknown (12h): Supported 00:09:39.417 Copy (19h): Supported LBA-Change 00:09:39.417 Unknown (1Dh): Supported LBA-Change 00:09:39.417 00:09:39.417 Error Log 00:09:39.417 ========= 00:09:39.417 00:09:39.417 Arbitration 00:09:39.417 =========== 00:09:39.417 Arbitration Burst: no limit 00:09:39.417 00:09:39.417 Power Management 00:09:39.417 ================ 00:09:39.417 Number of Power States: 1 00:09:39.417 Current Power State: Power State #0 00:09:39.417 Power State #0: 00:09:39.417 Max Power: 25.00 W 00:09:39.417 Non-Operational State: Operational 00:09:39.417 Entry Latency: 16 microseconds 00:09:39.417 Exit Latency: 4 microseconds 00:09:39.417 Relative Read Throughput: 0 00:09:39.417 Relative Read Latency: 0 00:09:39.417 Relative Write Throughput: 0 00:09:39.417 Relative Write Latency: 0 00:09:39.417 Idle Power: Not Reported 00:09:39.417 Active Power: Not Reported 00:09:39.417 Non-Operational Permissive Mode: Not Supported 00:09:39.417 00:09:39.417 Health Information 00:09:39.417 ================== 00:09:39.417 Critical Warnings: 00:09:39.417 Available Spare Space: OK 00:09:39.417 Temperature: OK 00:09:39.417 Device Reliability: OK 00:09:39.417 Read Only: No 00:09:39.417 Volatile Memory Backup: OK 00:09:39.417 Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.417 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:39.417 Available Spare: 0% 00:09:39.417 Available Spare Threshold: 0% 00:09:39.417 Life Percentage Used: 0% 00:09:39.417 Data Units Read: 779 00:09:39.417 Data Units Written: 708 00:09:39.417 Host Read Commands: 34960 00:09:39.417 Host Write Commands: 34383 00:09:39.417 Controller Busy Time: 0 minutes 00:09:39.417 Power Cycles: 0 00:09:39.417 Power On Hours: 0 hours 00:09:39.417 Unsafe Shutdowns: 0 00:09:39.417 Unrecoverable Media Errors: 0 00:09:39.417 Lifetime Error Log Entries: 0 00:09:39.417 Warning Temperature Time: 0 minutes 00:09:39.417 Critical Temperature Time: 0 minutes 00:09:39.417 00:09:39.417 Number of Queues 00:09:39.417 ================ 00:09:39.417 Number of I/O Submission Queues: 64 00:09:39.417 Number of I/O Completion Queues: 64 00:09:39.417 00:09:39.417 ZNS Specific Controller Data 00:09:39.417 ============================ 00:09:39.417 Zone Append Size Limit: 0 00:09:39.417 00:09:39.417 00:09:39.417 Active Namespaces 00:09:39.417 ================= 00:09:39.417 Namespace ID:1 00:09:39.417 Error Recovery Timeout: Unlimited 00:09:39.417 Command Set Identifier: NVM (00h) 00:09:39.417 Deallocate: Supported 00:09:39.417 Deallocated/Unwritten Error: Supported 00:09:39.417 Deallocated Read Value: All 0x00 00:09:39.417 Deallocate in Write Zeroes: Not Supported 00:09:39.417 Deallocated Guard Field: 0xFFFF 00:09:39.417 Flush: Supported 00:09:39.417 Reservation: Not Supported 00:09:39.417 Namespace Sharing Capabilities: Multiple Controllers 00:09:39.417 Size (in LBAs): 262144 (1GiB) 00:09:39.417 Capacity (in LBAs): 262144 (1GiB) 00:09:39.417 Utilization (in LBAs): 262144 (1GiB) 00:09:39.417 Thin Provisioning: Not Supported 00:09:39.417 Per-NS Atomic Units: No 00:09:39.417 Maximum Single Source Range Length: 128 00:09:39.417 Maximum Copy Length: 128 00:09:39.417 Maximum Source Range Count: 128 00:09:39.417 NGUID/EUI64 Never Reused: No 00:09:39.417 Namespace Write Protected: No 00:09:39.417 Endurance group ID: 1 00:09:39.417 Number of LBA Formats: 8 00:09:39.417 Current LBA Format: LBA Format #04 00:09:39.417 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:39.417 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:39.417 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:39.417 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:39.417 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:39.417 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:39.417 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:39.417 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:39.417 00:09:39.417 Get Feature FDP: 00:09:39.417 ================ 00:09:39.417 Enabled: Yes 00:09:39.417 FDP configuration index: 0 00:09:39.417 00:09:39.417 FDP configurations log page 00:09:39.417 =========================== 00:09:39.417 Number of FDP configurations: 1 00:09:39.417 Version: 0 00:09:39.417 Size: 112 00:09:39.417 FDP Configuration Descriptor: 0 00:09:39.417 Descriptor Size: 96 00:09:39.417 Reclaim Group Identifier format: 2 00:09:39.417 FDP Volatile Write Cache: Not Present 00:09:39.417 FDP Configuration: Valid 00:09:39.417 Vendor Specific Size: 0 00:09:39.417 Number of Reclaim Groups: 2 00:09:39.417 Number of Reclaim Unit Handles: 8 00:09:39.417 Max Placement Identifiers: 128 00:09:39.417 Number of Namespaces Supported: 256 00:09:39.417 Reclaim unit Nominal Size: 6000000 bytes 00:09:39.417 Estimated Reclaim Unit Time Limit: Not Reported 00:09:39.417 RUH Desc #000: RUH Type: Initially Isolated 00:09:39.417 RUH Desc #001: RUH
Type: Initially Isolated 00:09:39.417 RUH Desc #002: RUH Type: Initially Isolated 00:09:39.417 RUH Desc #003: RUH Type: Initially Isolated 00:09:39.417 RUH Desc #004: RUH Type: Initially Isolated 00:09:39.417 RUH Desc #005: RUH Type: Initially Isolated 00:09:39.417 RUH Desc #006: RUH Type: Initially Isolated 00:09:39.417 RUH Desc #007: RUH Type: Initially Isolated 00:09:39.417 00:09:39.417 FDP reclaim unit handle usage log page 00:09:39.417 ====================================== 00:09:39.417 Number of Reclaim Unit Handles: 8 00:09:39.417 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:39.417 RUH Usage Desc #001: RUH Attributes: Unused 00:09:39.417 RUH Usage Desc #002: RUH Attributes: Unused 00:09:39.417 RUH Usage Desc #003: RUH Attributes: Unused 00:09:39.417 RUH Usage Desc #004: RUH Attributes: Unused 00:09:39.417 RUH Usage Desc #005: RUH Attributes: Unused 00:09:39.417 RUH Usage Desc #006: RUH Attributes: Unused 00:09:39.417 RUH Usage Desc #007: RUH Attributes: Unused 00:09:39.417 00:09:39.417 FDP statistics log page 00:09:39.417 ======================= 00:09:39.417 Host bytes with metadata written: 436510720 00:09:39.417 [2024-11-26 18:54:10.515879] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64539 terminated unexpected 00:09:39.417 Media bytes with metadata written: 436576256 00:09:39.417 Media bytes erased: 0 00:09:39.417 00:09:39.417 FDP events log page 00:09:39.417 =================== 00:09:39.417 Number of FDP events: 0 00:09:39.417 00:09:39.417 NVM Specific Namespace Data 00:09:39.417 =========================== 00:09:39.417 Logical Block Storage Tag Mask: 0 00:09:39.417 Protection Information Capabilities: 00:09:39.417 16b Guard Protection Information Storage Tag Support: No 00:09:39.417 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:39.417 Storage Tag Check Read Support: No 00:09:39.417 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.417 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.417 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.417 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.418 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.418 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.418 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.418 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.418 ===================================================== 00:09:39.418 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:39.418 ===================================================== 00:09:39.418 Controller Capabilities/Features 00:09:39.418 ================================ 00:09:39.418 Vendor ID: 1b36 00:09:39.418 Subsystem Vendor ID: 1af4 00:09:39.418 Serial Number: 12342 00:09:39.418 Model Number: QEMU NVMe Ctrl 00:09:39.418 Firmware Version: 8.0.0 00:09:39.418 Recommended Arb Burst: 6 00:09:39.418 IEEE OUI Identifier: 00 54 52 00:09:39.418 Multi-path I/O 00:09:39.418 May have multiple subsystem ports: No 00:09:39.418 May have multiple controllers: No 00:09:39.418 Associated with SR-IOV VF: No 00:09:39.418 Max Data Transfer Size: 524288 00:09:39.418 Max Number of Namespaces: 256
00:09:39.418 Max Number of I/O Queues: 64 00:09:39.418 NVMe Specification Version (VS): 1.4 00:09:39.418 NVMe Specification Version (Identify): 1.4 00:09:39.418 Maximum Queue Entries: 2048 00:09:39.418 Contiguous Queues Required: Yes 00:09:39.418 Arbitration Mechanisms Supported 00:09:39.418 Weighted Round Robin: Not Supported 00:09:39.418 Vendor Specific: Not Supported 00:09:39.418 Reset Timeout: 7500 ms 00:09:39.418 Doorbell Stride: 4 bytes 00:09:39.418 NVM Subsystem Reset: Not Supported 00:09:39.418 Command Sets Supported 00:09:39.418 NVM Command Set: Supported 00:09:39.418 Boot Partition: Not Supported 00:09:39.418 Memory Page Size Minimum: 4096 bytes 00:09:39.418 Memory Page Size Maximum: 65536 bytes 00:09:39.418 Persistent Memory Region: Not Supported 00:09:39.418 Optional Asynchronous Events Supported 00:09:39.418 Namespace Attribute Notices: Supported 00:09:39.418 Firmware Activation Notices: Not Supported 00:09:39.418 ANA Change Notices: Not Supported 00:09:39.418 PLE Aggregate Log Change Notices: Not Supported 00:09:39.418 LBA Status Info Alert Notices: Not Supported 00:09:39.418 EGE Aggregate Log Change Notices: Not Supported 00:09:39.418 Normal NVM Subsystem Shutdown event: Not Supported 00:09:39.418 Zone Descriptor Change Notices: Not Supported 00:09:39.418 Discovery Log Change Notices: Not Supported 00:09:39.418 Controller Attributes 00:09:39.418 128-bit Host Identifier: Not Supported 00:09:39.418 Non-Operational Permissive Mode: Not Supported 00:09:39.418 NVM Sets: Not Supported 00:09:39.418 Read Recovery Levels: Not Supported 00:09:39.418 Endurance Groups: Not Supported 00:09:39.418 Predictable Latency Mode: Not Supported 00:09:39.418 Traffic Based Keep ALive: Not Supported 00:09:39.418 Namespace Granularity: Not Supported 00:09:39.418 SQ Associations: Not Supported 00:09:39.418 UUID List: Not Supported 00:09:39.418 Multi-Domain Subsystem: Not Supported 00:09:39.418 Fixed Capacity Management: Not Supported 00:09:39.418 Variable Capacity Management: Not Supported 00:09:39.418 Delete Endurance Group: Not Supported 00:09:39.418 Delete NVM Set: Not Supported 00:09:39.418 Extended LBA Formats Supported: Supported 00:09:39.418 Flexible Data Placement Supported: Not Supported 00:09:39.418 00:09:39.418 Controller Memory Buffer Support 00:09:39.418 ================================ 00:09:39.418 Supported: No 00:09:39.418 00:09:39.418 Persistent Memory Region Support 00:09:39.418 ================================ 00:09:39.418 Supported: No 00:09:39.418 00:09:39.418 Admin Command Set Attributes 00:09:39.418 ============================ 00:09:39.418 Security Send/Receive: Not Supported 00:09:39.418 Format NVM: Supported 00:09:39.418 Firmware Activate/Download: Not Supported 00:09:39.418 Namespace Management: Supported 00:09:39.418 Device Self-Test: Not Supported 00:09:39.418 Directives: Supported 00:09:39.418 NVMe-MI: Not Supported 00:09:39.418 Virtualization Management: Not Supported 00:09:39.418 Doorbell Buffer Config: Supported 00:09:39.418 Get LBA Status Capability: Not Supported 00:09:39.418 Command & Feature Lockdown Capability: Not Supported 00:09:39.418 Abort Command Limit: 4 00:09:39.418 Async Event Request Limit: 4 00:09:39.418 Number of Firmware Slots: N/A 00:09:39.418 Firmware Slot 1 Read-Only: N/A 00:09:39.418 Firmware Activation Without Reset: N/A 00:09:39.418 Multiple Update Detection Support: N/A 00:09:39.418 Firmware Update Granularity: No Information Provided 00:09:39.418 Per-Namespace SMART Log: Yes 00:09:39.418 Asymmetric Namespace Access Log Page: Not Supported 
00:09:39.418 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:39.418 Command Effects Log Page: Supported 00:09:39.418 Get Log Page Extended Data: Supported 00:09:39.418 Telemetry Log Pages: Not Supported 00:09:39.418 Persistent Event Log Pages: Not Supported 00:09:39.418 Supported Log Pages Log Page: May Support 00:09:39.418 Commands Supported & Effects Log Page: Not Supported 00:09:39.418 Feature Identifiers & Effects Log Page:May Support 00:09:39.418 NVMe-MI Commands & Effects Log Page: May Support 00:09:39.418 Data Area 4 for Telemetry Log: Not Supported 00:09:39.418 Error Log Page Entries Supported: 1 00:09:39.418 Keep Alive: Not Supported 00:09:39.418 00:09:39.418 NVM Command Set Attributes 00:09:39.418 ========================== 00:09:39.418 Submission Queue Entry Size 00:09:39.418 Max: 64 00:09:39.418 Min: 64 00:09:39.418 Completion Queue Entry Size 00:09:39.418 Max: 16 00:09:39.418 Min: 16 00:09:39.418 Number of Namespaces: 256 00:09:39.418 Compare Command: Supported 00:09:39.418 Write Uncorrectable Command: Not Supported 00:09:39.418 Dataset Management Command: Supported 00:09:39.418 Write Zeroes Command: Supported 00:09:39.418 Set Features Save Field: Supported 00:09:39.418 Reservations: Not Supported 00:09:39.418 Timestamp: Supported 00:09:39.418 Copy: Supported 00:09:39.418 Volatile Write Cache: Present 00:09:39.418 Atomic Write Unit (Normal): 1 00:09:39.418 Atomic Write Unit (PFail): 1 00:09:39.418 Atomic Compare & Write Unit: 1 00:09:39.418 Fused Compare & Write: Not Supported 00:09:39.418 Scatter-Gather List 00:09:39.418 SGL Command Set: Supported 00:09:39.418 SGL Keyed: Not Supported 00:09:39.418 SGL Bit Bucket Descriptor: Not Supported 00:09:39.418 SGL Metadata Pointer: Not Supported 00:09:39.418 Oversized SGL: Not Supported 00:09:39.418 SGL Metadata Address: Not Supported 00:09:39.418 SGL Offset: Not Supported 00:09:39.418 Transport SGL Data Block: Not Supported 00:09:39.418 Replay Protected Memory Block: Not Supported 00:09:39.418 00:09:39.418 Firmware Slot Information 00:09:39.418 ========================= 00:09:39.418 Active slot: 1 00:09:39.418 Slot 1 Firmware Revision: 1.0 00:09:39.418 00:09:39.418 00:09:39.418 Commands Supported and Effects 00:09:39.418 ============================== 00:09:39.418 Admin Commands 00:09:39.418 -------------- 00:09:39.418 Delete I/O Submission Queue (00h): Supported 00:09:39.418 Create I/O Submission Queue (01h): Supported 00:09:39.418 Get Log Page (02h): Supported 00:09:39.418 Delete I/O Completion Queue (04h): Supported 00:09:39.418 Create I/O Completion Queue (05h): Supported 00:09:39.418 Identify (06h): Supported 00:09:39.418 Abort (08h): Supported 00:09:39.418 Set Features (09h): Supported 00:09:39.418 Get Features (0Ah): Supported 00:09:39.418 Asynchronous Event Request (0Ch): Supported 00:09:39.418 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:39.419 Directive Send (19h): Supported 00:09:39.419 Directive Receive (1Ah): Supported 00:09:39.419 Virtualization Management (1Ch): Supported 00:09:39.419 Doorbell Buffer Config (7Ch): Supported 00:09:39.419 Format NVM (80h): Supported LBA-Change 00:09:39.419 I/O Commands 00:09:39.419 ------------ 00:09:39.419 Flush (00h): Supported LBA-Change 00:09:39.419 Write (01h): Supported LBA-Change 00:09:39.419 Read (02h): Supported 00:09:39.419 Compare (05h): Supported 00:09:39.419 Write Zeroes (08h): Supported LBA-Change 00:09:39.419 Dataset Management (09h): Supported LBA-Change 00:09:39.419 Unknown (0Ch): Supported 00:09:39.419 Unknown (12h): Supported 00:09:39.419 Copy (19h): 
Supported LBA-Change 00:09:39.419 Unknown (1Dh): Supported LBA-Change 00:09:39.419 00:09:39.419 Error Log 00:09:39.419 ========= 00:09:39.419 00:09:39.419 Arbitration 00:09:39.419 =========== 00:09:39.419 Arbitration Burst: no limit 00:09:39.419 00:09:39.419 Power Management 00:09:39.419 ================ 00:09:39.419 Number of Power States: 1 00:09:39.419 Current Power State: Power State #0 00:09:39.419 Power State #0: 00:09:39.419 Max Power: 25.00 W 00:09:39.419 Non-Operational State: Operational 00:09:39.419 Entry Latency: 16 microseconds 00:09:39.419 Exit Latency: 4 microseconds 00:09:39.419 Relative Read Throughput: 0 00:09:39.419 Relative Read Latency: 0 00:09:39.419 Relative Write Throughput: 0 00:09:39.419 Relative Write Latency: 0 00:09:39.419 Idle Power: Not Reported 00:09:39.419 Active Power: Not Reported 00:09:39.419 Non-Operational Permissive Mode: Not Supported 00:09:39.419 00:09:39.419 Health Information 00:09:39.419 ================== 00:09:39.419 Critical Warnings: 00:09:39.419 Available Spare Space: OK 00:09:39.419 Temperature: OK 00:09:39.419 Device Reliability: OK 00:09:39.419 Read Only: No 00:09:39.419 Volatile Memory Backup: OK 00:09:39.419 Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.419 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:39.419 Available Spare: 0% 00:09:39.419 Available Spare Threshold: 0% 00:09:39.419 Life Percentage Used: 0% 00:09:39.419 Data Units Read: 2088 00:09:39.419 Data Units Written: 1875 00:09:39.419 Host Read Commands: 102414 00:09:39.419 Host Write Commands: 100683 00:09:39.419 Controller Busy Time: 0 minutes 00:09:39.419 Power Cycles: 0 00:09:39.419 Power On Hours: 0 hours 00:09:39.419 Unsafe Shutdowns: 0 00:09:39.419 Unrecoverable Media Errors: 0 00:09:39.419 Lifetime Error Log Entries: 0 00:09:39.419 Warning Temperature Time: 0 minutes 00:09:39.419 Critical Temperature Time: 0 minutes 00:09:39.419 00:09:39.419 Number of Queues 00:09:39.419 ================ 00:09:39.419 Number of I/O Submission Queues: 64 00:09:39.419 Number of I/O Completion Queues: 64 00:09:39.419 00:09:39.419 ZNS Specific Controller Data 00:09:39.419 ============================ 00:09:39.419 Zone Append Size Limit: 0 00:09:39.419 00:09:39.419 00:09:39.419 Active Namespaces 00:09:39.419 ================= 00:09:39.419 Namespace ID:1 00:09:39.419 Error Recovery Timeout: Unlimited 00:09:39.419 Command Set Identifier: NVM (00h) 00:09:39.419 Deallocate: Supported 00:09:39.419 Deallocated/Unwritten Error: Supported 00:09:39.419 Deallocated Read Value: All 0x00 00:09:39.419 Deallocate in Write Zeroes: Not Supported 00:09:39.419 Deallocated Guard Field: 0xFFFF 00:09:39.419 Flush: Supported 00:09:39.419 Reservation: Not Supported 00:09:39.419 Namespace Sharing Capabilities: Private 00:09:39.419 Size (in LBAs): 1048576 (4GiB) 00:09:39.419 Capacity (in LBAs): 1048576 (4GiB) 00:09:39.419 Utilization (in LBAs): 1048576 (4GiB) 00:09:39.419 Thin Provisioning: Not Supported 00:09:39.419 Per-NS Atomic Units: No 00:09:39.419 Maximum Single Source Range Length: 128 00:09:39.419 Maximum Copy Length: 128 00:09:39.419 Maximum Source Range Count: 128 00:09:39.419 NGUID/EUI64 Never Reused: No 00:09:39.419 Namespace Write Protected: No 00:09:39.419 Number of LBA Formats: 8 00:09:39.419 Current LBA Format: LBA Format #04 00:09:39.419 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:39.419 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:39.419 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:39.419 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:39.419 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:09:39.419 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:39.419 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:39.419 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:39.419 00:09:39.419 NVM Specific Namespace Data 00:09:39.419 =========================== 00:09:39.419 Logical Block Storage Tag Mask: 0 00:09:39.419 Protection Information Capabilities: 00:09:39.419 16b Guard Protection Information Storage Tag Support: No 00:09:39.419 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:39.419 Storage Tag Check Read Support: No 00:09:39.419 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Namespace ID:2 00:09:39.419 Error Recovery Timeout: Unlimited 00:09:39.419 Command Set Identifier: NVM (00h) 00:09:39.419 Deallocate: Supported 00:09:39.419 Deallocated/Unwritten Error: Supported 00:09:39.419 Deallocated Read Value: All 0x00 00:09:39.419 Deallocate in Write Zeroes: Not Supported 00:09:39.419 Deallocated Guard Field: 0xFFFF 00:09:39.419 Flush: Supported 00:09:39.419 Reservation: Not Supported 00:09:39.419 Namespace Sharing Capabilities: Private 00:09:39.419 Size (in LBAs): 1048576 (4GiB) 00:09:39.419 Capacity (in LBAs): 1048576 (4GiB) 00:09:39.419 Utilization (in LBAs): 1048576 (4GiB) 00:09:39.419 Thin Provisioning: Not Supported 00:09:39.419 Per-NS Atomic Units: No 00:09:39.419 Maximum Single Source Range Length: 128 00:09:39.419 Maximum Copy Length: 128 00:09:39.419 Maximum Source Range Count: 128 00:09:39.419 NGUID/EUI64 Never Reused: No 00:09:39.419 Namespace Write Protected: No 00:09:39.419 Number of LBA Formats: 8 00:09:39.419 Current LBA Format: LBA Format #04 00:09:39.419 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:39.419 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:39.419 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:39.419 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:39.419 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:39.419 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:39.419 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:39.419 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:39.419 00:09:39.419 NVM Specific Namespace Data 00:09:39.419 =========================== 00:09:39.419 Logical Block Storage Tag Mask: 0 00:09:39.419 Protection Information Capabilities: 00:09:39.419 16b Guard Protection Information Storage Tag Support: No 00:09:39.419 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:39.419 Storage Tag Check Read Support: No 00:09:39.419 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.419 Namespace ID:3 00:09:39.419 Error Recovery Timeout: Unlimited 00:09:39.419 Command Set Identifier: NVM (00h) 00:09:39.419 Deallocate: Supported 00:09:39.419 Deallocated/Unwritten Error: Supported 00:09:39.419 Deallocated Read Value: All 0x00 00:09:39.419 Deallocate in Write Zeroes: Not Supported 00:09:39.420 Deallocated Guard Field: 0xFFFF 00:09:39.420 Flush: Supported 00:09:39.420 Reservation: Not Supported 00:09:39.420 Namespace Sharing Capabilities: Private 00:09:39.420 Size (in LBAs): 1048576 (4GiB) 00:09:39.420 Capacity (in LBAs): 1048576 (4GiB) 00:09:39.420 Utilization (in LBAs): 1048576 (4GiB) 00:09:39.420 Thin Provisioning: Not Supported 00:09:39.420 Per-NS Atomic Units: No 00:09:39.420 Maximum Single Source Range Length: 128 00:09:39.420 Maximum Copy Length: 128 00:09:39.420 Maximum Source Range Count: 128 00:09:39.420 NGUID/EUI64 Never Reused: No 00:09:39.420 Namespace Write Protected: No 00:09:39.420 Number of LBA Formats: 8 00:09:39.420 Current LBA Format: LBA Format #04 00:09:39.420 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:39.420 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:39.420 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:39.420 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:39.420 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:39.420 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:39.420 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:39.420 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:39.420 00:09:39.420 NVM Specific Namespace Data 00:09:39.420 =========================== 00:09:39.420 Logical Block Storage Tag Mask: 0 00:09:39.420 Protection Information Capabilities: 00:09:39.420 16b Guard Protection Information Storage Tag Support: No 00:09:39.420 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:39.420 Storage Tag Check Read Support: No 00:09:39.420 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.420 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.420 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.420 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.420 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.420 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.420 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.420 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.420 18:54:10 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:39.420 18:54:10 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:39.679 ===================================================== 00:09:39.679 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:39.679 ===================================================== 00:09:39.679 Controller Capabilities/Features 00:09:39.679 ================================ 00:09:39.679 Vendor ID: 1b36 00:09:39.679 Subsystem Vendor ID: 1af4 00:09:39.679 Serial Number: 12340 00:09:39.679 Model Number: QEMU NVMe Ctrl 00:09:39.679 Firmware Version: 8.0.0 00:09:39.679 Recommended Arb Burst: 6 00:09:39.679 IEEE OUI Identifier: 00 54 52 00:09:39.679 Multi-path I/O 00:09:39.679 May have multiple subsystem ports: No 00:09:39.679 May have multiple controllers: No 00:09:39.679 Associated with SR-IOV VF: No 00:09:39.679 Max Data Transfer Size: 524288 00:09:39.679 Max Number of Namespaces: 256 00:09:39.679 Max Number of I/O Queues: 64 00:09:39.679 NVMe Specification Version (VS): 1.4 00:09:39.679 NVMe Specification Version (Identify): 1.4 00:09:39.679 Maximum Queue Entries: 2048 00:09:39.679 Contiguous Queues Required: Yes 00:09:39.679 Arbitration Mechanisms Supported 00:09:39.679 Weighted Round Robin: Not Supported 00:09:39.679 Vendor Specific: Not Supported 00:09:39.679 Reset Timeout: 7500 ms 00:09:39.679 Doorbell Stride: 4 bytes 00:09:39.679 NVM Subsystem Reset: Not Supported 00:09:39.679 Command Sets Supported 00:09:39.679 NVM Command Set: Supported 00:09:39.679 Boot Partition: Not Supported 00:09:39.679 Memory Page Size Minimum: 4096 bytes 00:09:39.679 Memory Page Size Maximum: 65536 bytes 00:09:39.679 Persistent Memory Region: Not Supported 00:09:39.679 Optional Asynchronous Events Supported 00:09:39.679 Namespace Attribute Notices: Supported 00:09:39.679 Firmware Activation Notices: Not Supported 00:09:39.679 ANA Change Notices: Not Supported 00:09:39.679 PLE Aggregate Log Change Notices: Not Supported 00:09:39.679 LBA Status Info Alert Notices: Not Supported 00:09:39.679 EGE Aggregate Log Change Notices: Not Supported 00:09:39.679 Normal NVM Subsystem Shutdown event: Not Supported 00:09:39.679 Zone Descriptor Change Notices: Not Supported 00:09:39.679 Discovery Log Change Notices: Not Supported 00:09:39.679 Controller Attributes 00:09:39.679 128-bit Host Identifier: Not Supported 00:09:39.679 Non-Operational Permissive Mode: Not Supported 00:09:39.679 NVM Sets: Not Supported 00:09:39.679 Read Recovery Levels: Not Supported 00:09:39.679 Endurance Groups: Not Supported 00:09:39.679 Predictable Latency Mode: Not Supported 00:09:39.679 Traffic Based Keep ALive: Not Supported 00:09:39.679 Namespace Granularity: Not Supported 00:09:39.679 SQ Associations: Not Supported 00:09:39.679 UUID List: Not Supported 00:09:39.679 Multi-Domain Subsystem: Not Supported 00:09:39.679 Fixed Capacity Management: Not Supported 00:09:39.679 Variable Capacity Management: Not Supported 00:09:39.679 Delete Endurance Group: Not Supported 00:09:39.679 Delete NVM Set: Not Supported 00:09:39.679 Extended LBA Formats Supported: Supported 00:09:39.679 Flexible Data Placement Supported: Not Supported 00:09:39.679 00:09:39.679 Controller Memory Buffer Support 00:09:39.679 ================================ 00:09:39.679 Supported: No 00:09:39.679 00:09:39.679 Persistent Memory Region Support 00:09:39.679 ================================ 00:09:39.679 Supported: No 00:09:39.679 00:09:39.679 Admin Command Set Attributes 00:09:39.679 ============================ 00:09:39.679 Security Send/Receive: Not Supported 00:09:39.679 
Format NVM: Supported 00:09:39.679 Firmware Activate/Download: Not Supported 00:09:39.679 Namespace Management: Supported 00:09:39.679 Device Self-Test: Not Supported 00:09:39.679 Directives: Supported 00:09:39.679 NVMe-MI: Not Supported 00:09:39.679 Virtualization Management: Not Supported 00:09:39.679 Doorbell Buffer Config: Supported 00:09:39.679 Get LBA Status Capability: Not Supported 00:09:39.679 Command & Feature Lockdown Capability: Not Supported 00:09:39.679 Abort Command Limit: 4 00:09:39.679 Async Event Request Limit: 4 00:09:39.679 Number of Firmware Slots: N/A 00:09:39.679 Firmware Slot 1 Read-Only: N/A 00:09:39.679 Firmware Activation Without Reset: N/A 00:09:39.679 Multiple Update Detection Support: N/A 00:09:39.679 Firmware Update Granularity: No Information Provided 00:09:39.679 Per-Namespace SMART Log: Yes 00:09:39.679 Asymmetric Namespace Access Log Page: Not Supported 00:09:39.679 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:39.679 Command Effects Log Page: Supported 00:09:39.679 Get Log Page Extended Data: Supported 00:09:39.679 Telemetry Log Pages: Not Supported 00:09:39.679 Persistent Event Log Pages: Not Supported 00:09:39.679 Supported Log Pages Log Page: May Support 00:09:39.679 Commands Supported & Effects Log Page: Not Supported 00:09:39.679 Feature Identifiers & Effects Log Page:May Support 00:09:39.679 NVMe-MI Commands & Effects Log Page: May Support 00:09:39.679 Data Area 4 for Telemetry Log: Not Supported 00:09:39.679 Error Log Page Entries Supported: 1 00:09:39.679 Keep Alive: Not Supported 00:09:39.679 00:09:39.679 NVM Command Set Attributes 00:09:39.679 ========================== 00:09:39.679 Submission Queue Entry Size 00:09:39.679 Max: 64 00:09:39.679 Min: 64 00:09:39.679 Completion Queue Entry Size 00:09:39.679 Max: 16 00:09:39.679 Min: 16 00:09:39.679 Number of Namespaces: 256 00:09:39.679 Compare Command: Supported 00:09:39.679 Write Uncorrectable Command: Not Supported 00:09:39.679 Dataset Management Command: Supported 00:09:39.679 Write Zeroes Command: Supported 00:09:39.679 Set Features Save Field: Supported 00:09:39.679 Reservations: Not Supported 00:09:39.679 Timestamp: Supported 00:09:39.679 Copy: Supported 00:09:39.679 Volatile Write Cache: Present 00:09:39.679 Atomic Write Unit (Normal): 1 00:09:39.679 Atomic Write Unit (PFail): 1 00:09:39.679 Atomic Compare & Write Unit: 1 00:09:39.679 Fused Compare & Write: Not Supported 00:09:39.679 Scatter-Gather List 00:09:39.679 SGL Command Set: Supported 00:09:39.679 SGL Keyed: Not Supported 00:09:39.679 SGL Bit Bucket Descriptor: Not Supported 00:09:39.679 SGL Metadata Pointer: Not Supported 00:09:39.679 Oversized SGL: Not Supported 00:09:39.679 SGL Metadata Address: Not Supported 00:09:39.679 SGL Offset: Not Supported 00:09:39.679 Transport SGL Data Block: Not Supported 00:09:39.679 Replay Protected Memory Block: Not Supported 00:09:39.679 00:09:39.679 Firmware Slot Information 00:09:39.679 ========================= 00:09:39.679 Active slot: 1 00:09:39.679 Slot 1 Firmware Revision: 1.0 00:09:39.679 00:09:39.679 00:09:39.679 Commands Supported and Effects 00:09:39.679 ============================== 00:09:39.679 Admin Commands 00:09:39.679 -------------- 00:09:39.679 Delete I/O Submission Queue (00h): Supported 00:09:39.679 Create I/O Submission Queue (01h): Supported 00:09:39.679 Get Log Page (02h): Supported 00:09:39.679 Delete I/O Completion Queue (04h): Supported 00:09:39.679 Create I/O Completion Queue (05h): Supported 00:09:39.679 Identify (06h): Supported 00:09:39.679 Abort (08h): Supported 
00:09:39.679 Set Features (09h): Supported 00:09:39.679 Get Features (0Ah): Supported 00:09:39.679 Asynchronous Event Request (0Ch): Supported 00:09:39.679 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:39.679 Directive Send (19h): Supported 00:09:39.679 Directive Receive (1Ah): Supported 00:09:39.679 Virtualization Management (1Ch): Supported 00:09:39.679 Doorbell Buffer Config (7Ch): Supported 00:09:39.679 Format NVM (80h): Supported LBA-Change 00:09:39.679 I/O Commands 00:09:39.679 ------------ 00:09:39.679 Flush (00h): Supported LBA-Change 00:09:39.679 Write (01h): Supported LBA-Change 00:09:39.679 Read (02h): Supported 00:09:39.679 Compare (05h): Supported 00:09:39.680 Write Zeroes (08h): Supported LBA-Change 00:09:39.680 Dataset Management (09h): Supported LBA-Change 00:09:39.680 Unknown (0Ch): Supported 00:09:39.680 Unknown (12h): Supported 00:09:39.680 Copy (19h): Supported LBA-Change 00:09:39.680 Unknown (1Dh): Supported LBA-Change 00:09:39.680 00:09:39.680 Error Log 00:09:39.680 ========= 00:09:39.680 00:09:39.680 Arbitration 00:09:39.680 =========== 00:09:39.680 Arbitration Burst: no limit 00:09:39.680 00:09:39.680 Power Management 00:09:39.680 ================ 00:09:39.680 Number of Power States: 1 00:09:39.680 Current Power State: Power State #0 00:09:39.680 Power State #0: 00:09:39.680 Max Power: 25.00 W 00:09:39.680 Non-Operational State: Operational 00:09:39.680 Entry Latency: 16 microseconds 00:09:39.680 Exit Latency: 4 microseconds 00:09:39.680 Relative Read Throughput: 0 00:09:39.680 Relative Read Latency: 0 00:09:39.680 Relative Write Throughput: 0 00:09:39.680 Relative Write Latency: 0 00:09:39.680 Idle Power: Not Reported 00:09:39.680 Active Power: Not Reported 00:09:39.680 Non-Operational Permissive Mode: Not Supported 00:09:39.680 00:09:39.680 Health Information 00:09:39.680 ================== 00:09:39.680 Critical Warnings: 00:09:39.680 Available Spare Space: OK 00:09:39.680 Temperature: OK 00:09:39.680 Device Reliability: OK 00:09:39.680 Read Only: No 00:09:39.680 Volatile Memory Backup: OK 00:09:39.680 Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.680 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:39.680 Available Spare: 0% 00:09:39.680 Available Spare Threshold: 0% 00:09:39.680 Life Percentage Used: 0% 00:09:39.680 Data Units Read: 660 00:09:39.680 Data Units Written: 588 00:09:39.680 Host Read Commands: 33556 00:09:39.680 Host Write Commands: 33342 00:09:39.680 Controller Busy Time: 0 minutes 00:09:39.680 Power Cycles: 0 00:09:39.680 Power On Hours: 0 hours 00:09:39.680 Unsafe Shutdowns: 0 00:09:39.680 Unrecoverable Media Errors: 0 00:09:39.680 Lifetime Error Log Entries: 0 00:09:39.680 Warning Temperature Time: 0 minutes 00:09:39.680 Critical Temperature Time: 0 minutes 00:09:39.680 00:09:39.680 Number of Queues 00:09:39.680 ================ 00:09:39.680 Number of I/O Submission Queues: 64 00:09:39.680 Number of I/O Completion Queues: 64 00:09:39.680 00:09:39.680 ZNS Specific Controller Data 00:09:39.680 ============================ 00:09:39.680 Zone Append Size Limit: 0 00:09:39.680 00:09:39.680 00:09:39.680 Active Namespaces 00:09:39.680 ================= 00:09:39.680 Namespace ID:1 00:09:39.680 Error Recovery Timeout: Unlimited 00:09:39.680 Command Set Identifier: NVM (00h) 00:09:39.680 Deallocate: Supported 00:09:39.680 Deallocated/Unwritten Error: Supported 00:09:39.680 Deallocated Read Value: All 0x00 00:09:39.680 Deallocate in Write Zeroes: Not Supported 00:09:39.680 Deallocated Guard Field: 0xFFFF 00:09:39.680 Flush: 
Supported 00:09:39.680 Reservation: Not Supported 00:09:39.680 Metadata Transferred as: Separate Metadata Buffer 00:09:39.680 Namespace Sharing Capabilities: Private 00:09:39.680 Size (in LBAs): 1548666 (5GiB) 00:09:39.680 Capacity (in LBAs): 1548666 (5GiB) 00:09:39.680 Utilization (in LBAs): 1548666 (5GiB) 00:09:39.680 Thin Provisioning: Not Supported 00:09:39.680 Per-NS Atomic Units: No 00:09:39.680 Maximum Single Source Range Length: 128 00:09:39.680 Maximum Copy Length: 128 00:09:39.680 Maximum Source Range Count: 128 00:09:39.680 NGUID/EUI64 Never Reused: No 00:09:39.680 Namespace Write Protected: No 00:09:39.680 Number of LBA Formats: 8 00:09:39.680 Current LBA Format: LBA Format #07 00:09:39.680 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:39.680 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:39.680 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:39.680 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:39.680 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:39.680 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:39.680 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:39.680 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:39.680 00:09:39.680 NVM Specific Namespace Data 00:09:39.680 =========================== 00:09:39.680 Logical Block Storage Tag Mask: 0 00:09:39.680 Protection Information Capabilities: 00:09:39.680 16b Guard Protection Information Storage Tag Support: No 00:09:39.680 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:39.680 Storage Tag Check Read Support: No 00:09:39.680 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.680 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.680 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.680 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.680 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.680 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.680 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.680 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:39.938 18:54:10 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:39.938 18:54:10 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:40.195 ===================================================== 00:09:40.195 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:40.195 ===================================================== 00:09:40.195 Controller Capabilities/Features 00:09:40.195 ================================ 00:09:40.195 Vendor ID: 1b36 00:09:40.195 Subsystem Vendor ID: 1af4 00:09:40.195 Serial Number: 12341 00:09:40.195 Model Number: QEMU NVMe Ctrl 00:09:40.195 Firmware Version: 8.0.0 00:09:40.195 Recommended Arb Burst: 6 00:09:40.195 IEEE OUI Identifier: 00 54 52 00:09:40.195 Multi-path I/O 00:09:40.195 May have multiple subsystem ports: No 00:09:40.195 May have multiple controllers: No 00:09:40.195 Associated with SR-IOV VF: No 00:09:40.195 Max Data Transfer Size: 524288 00:09:40.195 Max Number of Namespaces: 256 00:09:40.195 Max Number of I/O Queues: 64 00:09:40.195 NVMe 
Specification Version (VS): 1.4 00:09:40.195 NVMe Specification Version (Identify): 1.4 00:09:40.195 Maximum Queue Entries: 2048 00:09:40.195 Contiguous Queues Required: Yes 00:09:40.195 Arbitration Mechanisms Supported 00:09:40.195 Weighted Round Robin: Not Supported 00:09:40.195 Vendor Specific: Not Supported 00:09:40.195 Reset Timeout: 7500 ms 00:09:40.195 Doorbell Stride: 4 bytes 00:09:40.195 NVM Subsystem Reset: Not Supported 00:09:40.195 Command Sets Supported 00:09:40.195 NVM Command Set: Supported 00:09:40.196 Boot Partition: Not Supported 00:09:40.196 Memory Page Size Minimum: 4096 bytes 00:09:40.196 Memory Page Size Maximum: 65536 bytes 00:09:40.196 Persistent Memory Region: Not Supported 00:09:40.196 Optional Asynchronous Events Supported 00:09:40.196 Namespace Attribute Notices: Supported 00:09:40.196 Firmware Activation Notices: Not Supported 00:09:40.196 ANA Change Notices: Not Supported 00:09:40.196 PLE Aggregate Log Change Notices: Not Supported 00:09:40.196 LBA Status Info Alert Notices: Not Supported 00:09:40.196 EGE Aggregate Log Change Notices: Not Supported 00:09:40.196 Normal NVM Subsystem Shutdown event: Not Supported 00:09:40.196 Zone Descriptor Change Notices: Not Supported 00:09:40.196 Discovery Log Change Notices: Not Supported 00:09:40.196 Controller Attributes 00:09:40.196 128-bit Host Identifier: Not Supported 00:09:40.196 Non-Operational Permissive Mode: Not Supported 00:09:40.196 NVM Sets: Not Supported 00:09:40.196 Read Recovery Levels: Not Supported 00:09:40.196 Endurance Groups: Not Supported 00:09:40.196 Predictable Latency Mode: Not Supported 00:09:40.196 Traffic Based Keep ALive: Not Supported 00:09:40.196 Namespace Granularity: Not Supported 00:09:40.196 SQ Associations: Not Supported 00:09:40.196 UUID List: Not Supported 00:09:40.196 Multi-Domain Subsystem: Not Supported 00:09:40.196 Fixed Capacity Management: Not Supported 00:09:40.196 Variable Capacity Management: Not Supported 00:09:40.196 Delete Endurance Group: Not Supported 00:09:40.196 Delete NVM Set: Not Supported 00:09:40.196 Extended LBA Formats Supported: Supported 00:09:40.196 Flexible Data Placement Supported: Not Supported 00:09:40.196 00:09:40.196 Controller Memory Buffer Support 00:09:40.196 ================================ 00:09:40.196 Supported: No 00:09:40.196 00:09:40.196 Persistent Memory Region Support 00:09:40.196 ================================ 00:09:40.196 Supported: No 00:09:40.196 00:09:40.196 Admin Command Set Attributes 00:09:40.196 ============================ 00:09:40.196 Security Send/Receive: Not Supported 00:09:40.196 Format NVM: Supported 00:09:40.196 Firmware Activate/Download: Not Supported 00:09:40.196 Namespace Management: Supported 00:09:40.196 Device Self-Test: Not Supported 00:09:40.196 Directives: Supported 00:09:40.196 NVMe-MI: Not Supported 00:09:40.196 Virtualization Management: Not Supported 00:09:40.196 Doorbell Buffer Config: Supported 00:09:40.196 Get LBA Status Capability: Not Supported 00:09:40.196 Command & Feature Lockdown Capability: Not Supported 00:09:40.196 Abort Command Limit: 4 00:09:40.196 Async Event Request Limit: 4 00:09:40.196 Number of Firmware Slots: N/A 00:09:40.196 Firmware Slot 1 Read-Only: N/A 00:09:40.196 Firmware Activation Without Reset: N/A 00:09:40.196 Multiple Update Detection Support: N/A 00:09:40.196 Firmware Update Granularity: No Information Provided 00:09:40.196 Per-Namespace SMART Log: Yes 00:09:40.196 Asymmetric Namespace Access Log Page: Not Supported 00:09:40.196 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:09:40.196 Command Effects Log Page: Supported 00:09:40.196 Get Log Page Extended Data: Supported 00:09:40.196 Telemetry Log Pages: Not Supported 00:09:40.196 Persistent Event Log Pages: Not Supported 00:09:40.196 Supported Log Pages Log Page: May Support 00:09:40.196 Commands Supported & Effects Log Page: Not Supported 00:09:40.196 Feature Identifiers & Effects Log Page:May Support 00:09:40.196 NVMe-MI Commands & Effects Log Page: May Support 00:09:40.196 Data Area 4 for Telemetry Log: Not Supported 00:09:40.196 Error Log Page Entries Supported: 1 00:09:40.196 Keep Alive: Not Supported 00:09:40.196 00:09:40.196 NVM Command Set Attributes 00:09:40.196 ========================== 00:09:40.196 Submission Queue Entry Size 00:09:40.196 Max: 64 00:09:40.196 Min: 64 00:09:40.196 Completion Queue Entry Size 00:09:40.196 Max: 16 00:09:40.196 Min: 16 00:09:40.196 Number of Namespaces: 256 00:09:40.196 Compare Command: Supported 00:09:40.196 Write Uncorrectable Command: Not Supported 00:09:40.196 Dataset Management Command: Supported 00:09:40.196 Write Zeroes Command: Supported 00:09:40.196 Set Features Save Field: Supported 00:09:40.196 Reservations: Not Supported 00:09:40.196 Timestamp: Supported 00:09:40.196 Copy: Supported 00:09:40.196 Volatile Write Cache: Present 00:09:40.196 Atomic Write Unit (Normal): 1 00:09:40.196 Atomic Write Unit (PFail): 1 00:09:40.196 Atomic Compare & Write Unit: 1 00:09:40.196 Fused Compare & Write: Not Supported 00:09:40.196 Scatter-Gather List 00:09:40.196 SGL Command Set: Supported 00:09:40.196 SGL Keyed: Not Supported 00:09:40.196 SGL Bit Bucket Descriptor: Not Supported 00:09:40.196 SGL Metadata Pointer: Not Supported 00:09:40.196 Oversized SGL: Not Supported 00:09:40.196 SGL Metadata Address: Not Supported 00:09:40.196 SGL Offset: Not Supported 00:09:40.196 Transport SGL Data Block: Not Supported 00:09:40.196 Replay Protected Memory Block: Not Supported 00:09:40.196 00:09:40.196 Firmware Slot Information 00:09:40.196 ========================= 00:09:40.196 Active slot: 1 00:09:40.196 Slot 1 Firmware Revision: 1.0 00:09:40.196 00:09:40.196 00:09:40.196 Commands Supported and Effects 00:09:40.196 ============================== 00:09:40.196 Admin Commands 00:09:40.196 -------------- 00:09:40.196 Delete I/O Submission Queue (00h): Supported 00:09:40.196 Create I/O Submission Queue (01h): Supported 00:09:40.196 Get Log Page (02h): Supported 00:09:40.196 Delete I/O Completion Queue (04h): Supported 00:09:40.196 Create I/O Completion Queue (05h): Supported 00:09:40.196 Identify (06h): Supported 00:09:40.196 Abort (08h): Supported 00:09:40.196 Set Features (09h): Supported 00:09:40.196 Get Features (0Ah): Supported 00:09:40.196 Asynchronous Event Request (0Ch): Supported 00:09:40.196 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:40.196 Directive Send (19h): Supported 00:09:40.196 Directive Receive (1Ah): Supported 00:09:40.196 Virtualization Management (1Ch): Supported 00:09:40.196 Doorbell Buffer Config (7Ch): Supported 00:09:40.196 Format NVM (80h): Supported LBA-Change 00:09:40.196 I/O Commands 00:09:40.196 ------------ 00:09:40.196 Flush (00h): Supported LBA-Change 00:09:40.196 Write (01h): Supported LBA-Change 00:09:40.196 Read (02h): Supported 00:09:40.196 Compare (05h): Supported 00:09:40.196 Write Zeroes (08h): Supported LBA-Change 00:09:40.196 Dataset Management (09h): Supported LBA-Change 00:09:40.196 Unknown (0Ch): Supported 00:09:40.196 Unknown (12h): Supported 00:09:40.196 Copy (19h): Supported LBA-Change 00:09:40.196 Unknown (1Dh): 
Supported LBA-Change 00:09:40.196 00:09:40.196 Error Log 00:09:40.196 ========= 00:09:40.196 00:09:40.196 Arbitration 00:09:40.196 =========== 00:09:40.196 Arbitration Burst: no limit 00:09:40.196 00:09:40.196 Power Management 00:09:40.196 ================ 00:09:40.196 Number of Power States: 1 00:09:40.196 Current Power State: Power State #0 00:09:40.196 Power State #0: 00:09:40.196 Max Power: 25.00 W 00:09:40.196 Non-Operational State: Operational 00:09:40.196 Entry Latency: 16 microseconds 00:09:40.196 Exit Latency: 4 microseconds 00:09:40.196 Relative Read Throughput: 0 00:09:40.196 Relative Read Latency: 0 00:09:40.196 Relative Write Throughput: 0 00:09:40.196 Relative Write Latency: 0 00:09:40.196 Idle Power: Not Reported 00:09:40.196 Active Power: Not Reported 00:09:40.196 Non-Operational Permissive Mode: Not Supported 00:09:40.196 00:09:40.196 Health Information 00:09:40.196 ================== 00:09:40.196 Critical Warnings: 00:09:40.196 Available Spare Space: OK 00:09:40.196 Temperature: OK 00:09:40.196 Device Reliability: OK 00:09:40.196 Read Only: No 00:09:40.196 Volatile Memory Backup: OK 00:09:40.196 Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.196 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:40.196 Available Spare: 0% 00:09:40.196 Available Spare Threshold: 0% 00:09:40.196 Life Percentage Used: 0% 00:09:40.196 Data Units Read: 1017 00:09:40.196 Data Units Written: 890 00:09:40.196 Host Read Commands: 49537 00:09:40.196 Host Write Commands: 48427 00:09:40.196 Controller Busy Time: 0 minutes 00:09:40.196 Power Cycles: 0 00:09:40.196 Power On Hours: 0 hours 00:09:40.196 Unsafe Shutdowns: 0 00:09:40.196 Unrecoverable Media Errors: 0 00:09:40.196 Lifetime Error Log Entries: 0 00:09:40.196 Warning Temperature Time: 0 minutes 00:09:40.196 Critical Temperature Time: 0 minutes 00:09:40.196 00:09:40.196 Number of Queues 00:09:40.196 ================ 00:09:40.196 Number of I/O Submission Queues: 64 00:09:40.196 Number of I/O Completion Queues: 64 00:09:40.196 00:09:40.196 ZNS Specific Controller Data 00:09:40.196 ============================ 00:09:40.196 Zone Append Size Limit: 0 00:09:40.196 00:09:40.196 00:09:40.196 Active Namespaces 00:09:40.196 ================= 00:09:40.196 Namespace ID:1 00:09:40.196 Error Recovery Timeout: Unlimited 00:09:40.196 Command Set Identifier: NVM (00h) 00:09:40.196 Deallocate: Supported 00:09:40.196 Deallocated/Unwritten Error: Supported 00:09:40.196 Deallocated Read Value: All 0x00 00:09:40.196 Deallocate in Write Zeroes: Not Supported 00:09:40.196 Deallocated Guard Field: 0xFFFF 00:09:40.196 Flush: Supported 00:09:40.196 Reservation: Not Supported 00:09:40.196 Namespace Sharing Capabilities: Private 00:09:40.196 Size (in LBAs): 1310720 (5GiB) 00:09:40.196 Capacity (in LBAs): 1310720 (5GiB) 00:09:40.196 Utilization (in LBAs): 1310720 (5GiB) 00:09:40.196 Thin Provisioning: Not Supported 00:09:40.196 Per-NS Atomic Units: No 00:09:40.196 Maximum Single Source Range Length: 128 00:09:40.196 Maximum Copy Length: 128 00:09:40.196 Maximum Source Range Count: 128 00:09:40.196 NGUID/EUI64 Never Reused: No 00:09:40.196 Namespace Write Protected: No 00:09:40.196 Number of LBA Formats: 8 00:09:40.196 Current LBA Format: LBA Format #04 00:09:40.196 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.196 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.196 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.196 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.196 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:09:40.196 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.196 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:40.196 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.196 00:09:40.196 NVM Specific Namespace Data 00:09:40.196 =========================== 00:09:40.196 Logical Block Storage Tag Mask: 0 00:09:40.196 Protection Information Capabilities: 00:09:40.196 16b Guard Protection Information Storage Tag Support: No 00:09:40.196 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:40.196 Storage Tag Check Read Support: No 00:09:40.196 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.196 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.196 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.196 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.196 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.196 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.196 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.196 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.196 18:54:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:40.196 18:54:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:40.454 ===================================================== 00:09:40.454 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:40.454 ===================================================== 00:09:40.454 Controller Capabilities/Features 00:09:40.454 ================================ 00:09:40.454 Vendor ID: 1b36 00:09:40.454 Subsystem Vendor ID: 1af4 00:09:40.454 Serial Number: 12342 00:09:40.454 Model Number: QEMU NVMe Ctrl 00:09:40.454 Firmware Version: 8.0.0 00:09:40.454 Recommended Arb Burst: 6 00:09:40.454 IEEE OUI Identifier: 00 54 52 00:09:40.454 Multi-path I/O 00:09:40.454 May have multiple subsystem ports: No 00:09:40.454 May have multiple controllers: No 00:09:40.454 Associated with SR-IOV VF: No 00:09:40.454 Max Data Transfer Size: 524288 00:09:40.454 Max Number of Namespaces: 256 00:09:40.454 Max Number of I/O Queues: 64 00:09:40.454 NVMe Specification Version (VS): 1.4 00:09:40.454 NVMe Specification Version (Identify): 1.4 00:09:40.454 Maximum Queue Entries: 2048 00:09:40.454 Contiguous Queues Required: Yes 00:09:40.454 Arbitration Mechanisms Supported 00:09:40.454 Weighted Round Robin: Not Supported 00:09:40.454 Vendor Specific: Not Supported 00:09:40.454 Reset Timeout: 7500 ms 00:09:40.454 Doorbell Stride: 4 bytes 00:09:40.454 NVM Subsystem Reset: Not Supported 00:09:40.454 Command Sets Supported 00:09:40.454 NVM Command Set: Supported 00:09:40.454 Boot Partition: Not Supported 00:09:40.454 Memory Page Size Minimum: 4096 bytes 00:09:40.454 Memory Page Size Maximum: 65536 bytes 00:09:40.454 Persistent Memory Region: Not Supported 00:09:40.454 Optional Asynchronous Events Supported 00:09:40.454 Namespace Attribute Notices: Supported 00:09:40.454 Firmware Activation Notices: Not Supported 00:09:40.454 ANA Change Notices: Not Supported 00:09:40.454 PLE Aggregate Log Change Notices: Not Supported 00:09:40.454 LBA Status Info Alert Notices: 
Not Supported 00:09:40.454 EGE Aggregate Log Change Notices: Not Supported 00:09:40.454 Normal NVM Subsystem Shutdown event: Not Supported 00:09:40.454 Zone Descriptor Change Notices: Not Supported 00:09:40.454 Discovery Log Change Notices: Not Supported 00:09:40.454 Controller Attributes 00:09:40.454 128-bit Host Identifier: Not Supported 00:09:40.454 Non-Operational Permissive Mode: Not Supported 00:09:40.454 NVM Sets: Not Supported 00:09:40.454 Read Recovery Levels: Not Supported 00:09:40.454 Endurance Groups: Not Supported 00:09:40.454 Predictable Latency Mode: Not Supported 00:09:40.454 Traffic Based Keep ALive: Not Supported 00:09:40.454 Namespace Granularity: Not Supported 00:09:40.454 SQ Associations: Not Supported 00:09:40.454 UUID List: Not Supported 00:09:40.454 Multi-Domain Subsystem: Not Supported 00:09:40.454 Fixed Capacity Management: Not Supported 00:09:40.454 Variable Capacity Management: Not Supported 00:09:40.454 Delete Endurance Group: Not Supported 00:09:40.454 Delete NVM Set: Not Supported 00:09:40.454 Extended LBA Formats Supported: Supported 00:09:40.454 Flexible Data Placement Supported: Not Supported 00:09:40.454 00:09:40.454 Controller Memory Buffer Support 00:09:40.454 ================================ 00:09:40.454 Supported: No 00:09:40.454 00:09:40.454 Persistent Memory Region Support 00:09:40.454 ================================ 00:09:40.454 Supported: No 00:09:40.454 00:09:40.454 Admin Command Set Attributes 00:09:40.454 ============================ 00:09:40.454 Security Send/Receive: Not Supported 00:09:40.454 Format NVM: Supported 00:09:40.454 Firmware Activate/Download: Not Supported 00:09:40.454 Namespace Management: Supported 00:09:40.454 Device Self-Test: Not Supported 00:09:40.454 Directives: Supported 00:09:40.454 NVMe-MI: Not Supported 00:09:40.454 Virtualization Management: Not Supported 00:09:40.454 Doorbell Buffer Config: Supported 00:09:40.454 Get LBA Status Capability: Not Supported 00:09:40.454 Command & Feature Lockdown Capability: Not Supported 00:09:40.454 Abort Command Limit: 4 00:09:40.454 Async Event Request Limit: 4 00:09:40.454 Number of Firmware Slots: N/A 00:09:40.454 Firmware Slot 1 Read-Only: N/A 00:09:40.454 Firmware Activation Without Reset: N/A 00:09:40.454 Multiple Update Detection Support: N/A 00:09:40.454 Firmware Update Granularity: No Information Provided 00:09:40.454 Per-Namespace SMART Log: Yes 00:09:40.454 Asymmetric Namespace Access Log Page: Not Supported 00:09:40.454 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:40.454 Command Effects Log Page: Supported 00:09:40.454 Get Log Page Extended Data: Supported 00:09:40.454 Telemetry Log Pages: Not Supported 00:09:40.454 Persistent Event Log Pages: Not Supported 00:09:40.454 Supported Log Pages Log Page: May Support 00:09:40.454 Commands Supported & Effects Log Page: Not Supported 00:09:40.454 Feature Identifiers & Effects Log Page:May Support 00:09:40.454 NVMe-MI Commands & Effects Log Page: May Support 00:09:40.454 Data Area 4 for Telemetry Log: Not Supported 00:09:40.454 Error Log Page Entries Supported: 1 00:09:40.454 Keep Alive: Not Supported 00:09:40.454 00:09:40.454 NVM Command Set Attributes 00:09:40.454 ========================== 00:09:40.454 Submission Queue Entry Size 00:09:40.454 Max: 64 00:09:40.454 Min: 64 00:09:40.454 Completion Queue Entry Size 00:09:40.454 Max: 16 00:09:40.454 Min: 16 00:09:40.454 Number of Namespaces: 256 00:09:40.454 Compare Command: Supported 00:09:40.454 Write Uncorrectable Command: Not Supported 00:09:40.454 Dataset Management Command: 
Supported 00:09:40.454 Write Zeroes Command: Supported 00:09:40.454 Set Features Save Field: Supported 00:09:40.454 Reservations: Not Supported 00:09:40.454 Timestamp: Supported 00:09:40.454 Copy: Supported 00:09:40.454 Volatile Write Cache: Present 00:09:40.454 Atomic Write Unit (Normal): 1 00:09:40.454 Atomic Write Unit (PFail): 1 00:09:40.454 Atomic Compare & Write Unit: 1 00:09:40.454 Fused Compare & Write: Not Supported 00:09:40.454 Scatter-Gather List 00:09:40.454 SGL Command Set: Supported 00:09:40.454 SGL Keyed: Not Supported 00:09:40.454 SGL Bit Bucket Descriptor: Not Supported 00:09:40.454 SGL Metadata Pointer: Not Supported 00:09:40.454 Oversized SGL: Not Supported 00:09:40.454 SGL Metadata Address: Not Supported 00:09:40.454 SGL Offset: Not Supported 00:09:40.454 Transport SGL Data Block: Not Supported 00:09:40.454 Replay Protected Memory Block: Not Supported 00:09:40.454 00:09:40.454 Firmware Slot Information 00:09:40.454 ========================= 00:09:40.454 Active slot: 1 00:09:40.454 Slot 1 Firmware Revision: 1.0 00:09:40.454 00:09:40.454 00:09:40.454 Commands Supported and Effects 00:09:40.454 ============================== 00:09:40.454 Admin Commands 00:09:40.454 -------------- 00:09:40.454 Delete I/O Submission Queue (00h): Supported 00:09:40.454 Create I/O Submission Queue (01h): Supported 00:09:40.454 Get Log Page (02h): Supported 00:09:40.454 Delete I/O Completion Queue (04h): Supported 00:09:40.454 Create I/O Completion Queue (05h): Supported 00:09:40.454 Identify (06h): Supported 00:09:40.454 Abort (08h): Supported 00:09:40.454 Set Features (09h): Supported 00:09:40.454 Get Features (0Ah): Supported 00:09:40.454 Asynchronous Event Request (0Ch): Supported 00:09:40.454 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:40.454 Directive Send (19h): Supported 00:09:40.454 Directive Receive (1Ah): Supported 00:09:40.454 Virtualization Management (1Ch): Supported 00:09:40.454 Doorbell Buffer Config (7Ch): Supported 00:09:40.454 Format NVM (80h): Supported LBA-Change 00:09:40.454 I/O Commands 00:09:40.454 ------------ 00:09:40.454 Flush (00h): Supported LBA-Change 00:09:40.454 Write (01h): Supported LBA-Change 00:09:40.454 Read (02h): Supported 00:09:40.454 Compare (05h): Supported 00:09:40.454 Write Zeroes (08h): Supported LBA-Change 00:09:40.454 Dataset Management (09h): Supported LBA-Change 00:09:40.454 Unknown (0Ch): Supported 00:09:40.454 Unknown (12h): Supported 00:09:40.454 Copy (19h): Supported LBA-Change 00:09:40.454 Unknown (1Dh): Supported LBA-Change 00:09:40.454 00:09:40.454 Error Log 00:09:40.454 ========= 00:09:40.454 00:09:40.454 Arbitration 00:09:40.454 =========== 00:09:40.454 Arbitration Burst: no limit 00:09:40.454 00:09:40.454 Power Management 00:09:40.454 ================ 00:09:40.454 Number of Power States: 1 00:09:40.454 Current Power State: Power State #0 00:09:40.454 Power State #0: 00:09:40.454 Max Power: 25.00 W 00:09:40.454 Non-Operational State: Operational 00:09:40.454 Entry Latency: 16 microseconds 00:09:40.454 Exit Latency: 4 microseconds 00:09:40.454 Relative Read Throughput: 0 00:09:40.454 Relative Read Latency: 0 00:09:40.454 Relative Write Throughput: 0 00:09:40.454 Relative Write Latency: 0 00:09:40.454 Idle Power: Not Reported 00:09:40.454 Active Power: Not Reported 00:09:40.454 Non-Operational Permissive Mode: Not Supported 00:09:40.454 00:09:40.454 Health Information 00:09:40.454 ================== 00:09:40.454 Critical Warnings: 00:09:40.454 Available Spare Space: OK 00:09:40.454 Temperature: OK 00:09:40.454 Device 
Reliability: OK 00:09:40.454 Read Only: No 00:09:40.454 Volatile Memory Backup: OK 00:09:40.454 Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.454 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:40.454 Available Spare: 0% 00:09:40.454 Available Spare Threshold: 0% 00:09:40.454 Life Percentage Used: 0% 00:09:40.454 Data Units Read: 2088 00:09:40.454 Data Units Written: 1875 00:09:40.454 Host Read Commands: 102414 00:09:40.454 Host Write Commands: 100683 00:09:40.454 Controller Busy Time: 0 minutes 00:09:40.454 Power Cycles: 0 00:09:40.454 Power On Hours: 0 hours 00:09:40.454 Unsafe Shutdowns: 0 00:09:40.454 Unrecoverable Media Errors: 0 00:09:40.454 Lifetime Error Log Entries: 0 00:09:40.454 Warning Temperature Time: 0 minutes 00:09:40.454 Critical Temperature Time: 0 minutes 00:09:40.454 00:09:40.454 Number of Queues 00:09:40.454 ================ 00:09:40.454 Number of I/O Submission Queues: 64 00:09:40.454 Number of I/O Completion Queues: 64 00:09:40.454 00:09:40.454 ZNS Specific Controller Data 00:09:40.454 ============================ 00:09:40.454 Zone Append Size Limit: 0 00:09:40.454 00:09:40.454 00:09:40.454 Active Namespaces 00:09:40.454 ================= 00:09:40.454 Namespace ID:1 00:09:40.454 Error Recovery Timeout: Unlimited 00:09:40.454 Command Set Identifier: NVM (00h) 00:09:40.454 Deallocate: Supported 00:09:40.454 Deallocated/Unwritten Error: Supported 00:09:40.454 Deallocated Read Value: All 0x00 00:09:40.454 Deallocate in Write Zeroes: Not Supported 00:09:40.454 Deallocated Guard Field: 0xFFFF 00:09:40.454 Flush: Supported 00:09:40.454 Reservation: Not Supported 00:09:40.454 Namespace Sharing Capabilities: Private 00:09:40.455 Size (in LBAs): 1048576 (4GiB) 00:09:40.455 Capacity (in LBAs): 1048576 (4GiB) 00:09:40.455 Utilization (in LBAs): 1048576 (4GiB) 00:09:40.455 Thin Provisioning: Not Supported 00:09:40.455 Per-NS Atomic Units: No 00:09:40.455 Maximum Single Source Range Length: 128 00:09:40.455 Maximum Copy Length: 128 00:09:40.455 Maximum Source Range Count: 128 00:09:40.455 NGUID/EUI64 Never Reused: No 00:09:40.455 Namespace Write Protected: No 00:09:40.455 Number of LBA Formats: 8 00:09:40.455 Current LBA Format: LBA Format #04 00:09:40.455 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.455 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.455 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.455 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.455 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:40.455 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.455 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:40.455 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.455 00:09:40.455 NVM Specific Namespace Data 00:09:40.455 =========================== 00:09:40.455 Logical Block Storage Tag Mask: 0 00:09:40.455 Protection Information Capabilities: 00:09:40.455 16b Guard Protection Information Storage Tag Support: No 00:09:40.455 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:40.455 Storage Tag Check Read Support: No 00:09:40.455 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Namespace ID:2 00:09:40.455 Error Recovery Timeout: Unlimited 00:09:40.455 Command Set Identifier: NVM (00h) 00:09:40.455 Deallocate: Supported 00:09:40.455 Deallocated/Unwritten Error: Supported 00:09:40.455 Deallocated Read Value: All 0x00 00:09:40.455 Deallocate in Write Zeroes: Not Supported 00:09:40.455 Deallocated Guard Field: 0xFFFF 00:09:40.455 Flush: Supported 00:09:40.455 Reservation: Not Supported 00:09:40.455 Namespace Sharing Capabilities: Private 00:09:40.455 Size (in LBAs): 1048576 (4GiB) 00:09:40.455 Capacity (in LBAs): 1048576 (4GiB) 00:09:40.455 Utilization (in LBAs): 1048576 (4GiB) 00:09:40.455 Thin Provisioning: Not Supported 00:09:40.455 Per-NS Atomic Units: No 00:09:40.455 Maximum Single Source Range Length: 128 00:09:40.455 Maximum Copy Length: 128 00:09:40.455 Maximum Source Range Count: 128 00:09:40.455 NGUID/EUI64 Never Reused: No 00:09:40.455 Namespace Write Protected: No 00:09:40.455 Number of LBA Formats: 8 00:09:40.455 Current LBA Format: LBA Format #04 00:09:40.455 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.455 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.455 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.455 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.455 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:40.455 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.455 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:40.455 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.455 00:09:40.455 NVM Specific Namespace Data 00:09:40.455 =========================== 00:09:40.455 Logical Block Storage Tag Mask: 0 00:09:40.455 Protection Information Capabilities: 00:09:40.455 16b Guard Protection Information Storage Tag Support: No 00:09:40.455 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:40.455 Storage Tag Check Read Support: No 00:09:40.455 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Namespace ID:3 00:09:40.455 Error Recovery Timeout: Unlimited 00:09:40.455 Command Set Identifier: NVM (00h) 00:09:40.455 Deallocate: Supported 00:09:40.455 Deallocated/Unwritten Error: Supported 00:09:40.455 Deallocated Read Value: All 0x00 00:09:40.455 Deallocate in Write Zeroes: Not Supported 00:09:40.455 Deallocated Guard Field: 0xFFFF 00:09:40.455 Flush: Supported 00:09:40.455 Reservation: Not Supported 00:09:40.455 
Namespace Sharing Capabilities: Private 00:09:40.455 Size (in LBAs): 1048576 (4GiB) 00:09:40.455 Capacity (in LBAs): 1048576 (4GiB) 00:09:40.455 Utilization (in LBAs): 1048576 (4GiB) 00:09:40.455 Thin Provisioning: Not Supported 00:09:40.455 Per-NS Atomic Units: No 00:09:40.455 Maximum Single Source Range Length: 128 00:09:40.455 Maximum Copy Length: 128 00:09:40.455 Maximum Source Range Count: 128 00:09:40.455 NGUID/EUI64 Never Reused: No 00:09:40.455 Namespace Write Protected: No 00:09:40.455 Number of LBA Formats: 8 00:09:40.455 Current LBA Format: LBA Format #04 00:09:40.455 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.455 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.455 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.455 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.455 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:40.455 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.455 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:40.455 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.455 00:09:40.455 NVM Specific Namespace Data 00:09:40.455 =========================== 00:09:40.455 Logical Block Storage Tag Mask: 0 00:09:40.455 Protection Information Capabilities: 00:09:40.455 16b Guard Protection Information Storage Tag Support: No 00:09:40.455 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:40.455 Storage Tag Check Read Support: No 00:09:40.455 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.455 18:54:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:40.455 18:54:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:40.713 ===================================================== 00:09:40.713 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:40.713 ===================================================== 00:09:40.713 Controller Capabilities/Features 00:09:40.713 ================================ 00:09:40.713 Vendor ID: 1b36 00:09:40.713 Subsystem Vendor ID: 1af4 00:09:40.713 Serial Number: 12343 00:09:40.713 Model Number: QEMU NVMe Ctrl 00:09:40.713 Firmware Version: 8.0.0 00:09:40.713 Recommended Arb Burst: 6 00:09:40.713 IEEE OUI Identifier: 00 54 52 00:09:40.713 Multi-path I/O 00:09:40.713 May have multiple subsystem ports: No 00:09:40.713 May have multiple controllers: Yes 00:09:40.713 Associated with SR-IOV VF: No 00:09:40.713 Max Data Transfer Size: 524288 00:09:40.713 Max Number of Namespaces: 256 00:09:40.713 Max Number of I/O Queues: 64 00:09:40.713 NVMe Specification Version (VS): 1.4 00:09:40.713 NVMe Specification Version (Identify): 1.4 00:09:40.713 Maximum Queue Entries: 2048 
00:09:40.713 Contiguous Queues Required: Yes 00:09:40.713 Arbitration Mechanisms Supported 00:09:40.713 Weighted Round Robin: Not Supported 00:09:40.713 Vendor Specific: Not Supported 00:09:40.713 Reset Timeout: 7500 ms 00:09:40.713 Doorbell Stride: 4 bytes 00:09:40.713 NVM Subsystem Reset: Not Supported 00:09:40.713 Command Sets Supported 00:09:40.713 NVM Command Set: Supported 00:09:40.713 Boot Partition: Not Supported 00:09:40.713 Memory Page Size Minimum: 4096 bytes 00:09:40.713 Memory Page Size Maximum: 65536 bytes 00:09:40.713 Persistent Memory Region: Not Supported 00:09:40.713 Optional Asynchronous Events Supported 00:09:40.713 Namespace Attribute Notices: Supported 00:09:40.713 Firmware Activation Notices: Not Supported 00:09:40.713 ANA Change Notices: Not Supported 00:09:40.713 PLE Aggregate Log Change Notices: Not Supported 00:09:40.713 LBA Status Info Alert Notices: Not Supported 00:09:40.713 EGE Aggregate Log Change Notices: Not Supported 00:09:40.713 Normal NVM Subsystem Shutdown event: Not Supported 00:09:40.713 Zone Descriptor Change Notices: Not Supported 00:09:40.713 Discovery Log Change Notices: Not Supported 00:09:40.713 Controller Attributes 00:09:40.713 128-bit Host Identifier: Not Supported 00:09:40.713 Non-Operational Permissive Mode: Not Supported 00:09:40.713 NVM Sets: Not Supported 00:09:40.713 Read Recovery Levels: Not Supported 00:09:40.713 Endurance Groups: Supported 00:09:40.713 Predictable Latency Mode: Not Supported 00:09:40.713 Traffic Based Keep Alive: Not Supported 00:09:40.714 Namespace Granularity: Not Supported 00:09:40.714 SQ Associations: Not Supported 00:09:40.714 UUID List: Not Supported 00:09:40.714 Multi-Domain Subsystem: Not Supported 00:09:40.714 Fixed Capacity Management: Not Supported 00:09:40.714 Variable Capacity Management: Not Supported 00:09:40.714 Delete Endurance Group: Not Supported 00:09:40.714 Delete NVM Set: Not Supported 00:09:40.714 Extended LBA Formats Supported: Supported 00:09:40.714 Flexible Data Placement Supported: Supported 00:09:40.714 00:09:40.714 Controller Memory Buffer Support 00:09:40.714 ================================ 00:09:40.714 Supported: No 00:09:40.714 00:09:40.714 Persistent Memory Region Support 00:09:40.714 ================================ 00:09:40.714 Supported: No 00:09:40.714 00:09:40.714 Admin Command Set Attributes 00:09:40.714 ============================ 00:09:40.714 Security Send/Receive: Not Supported 00:09:40.714 Format NVM: Supported 00:09:40.714 Firmware Activate/Download: Not Supported 00:09:40.714 Namespace Management: Supported 00:09:40.714 Device Self-Test: Not Supported 00:09:40.714 Directives: Supported 00:09:40.714 NVMe-MI: Not Supported 00:09:40.714 Virtualization Management: Not Supported 00:09:40.714 Doorbell Buffer Config: Supported 00:09:40.714 Get LBA Status Capability: Not Supported 00:09:40.714 Command & Feature Lockdown Capability: Not Supported 00:09:40.714 Abort Command Limit: 4 00:09:40.714 Async Event Request Limit: 4 00:09:40.714 Number of Firmware Slots: N/A 00:09:40.714 Firmware Slot 1 Read-Only: N/A 00:09:40.714 Firmware Activation Without Reset: N/A 00:09:40.714 Multiple Update Detection Support: N/A 00:09:40.714 Firmware Update Granularity: No Information Provided 00:09:40.714 Per-Namespace SMART Log: Yes 00:09:40.714 Asymmetric Namespace Access Log Page: Not Supported 00:09:40.714 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:40.714 Command Effects Log Page: Supported 00:09:40.714 Get Log Page Extended Data: Supported 00:09:40.714 Telemetry Log Pages: Not 
Supported 00:09:40.714 Persistent Event Log Pages: Not Supported 00:09:40.714 Supported Log Pages Log Page: May Support 00:09:40.714 Commands Supported & Effects Log Page: Not Supported 00:09:40.714 Feature Identifiers & Effects Log Page: May Support 00:09:40.714 NVMe-MI Commands & Effects Log Page: May Support 00:09:40.714 Data Area 4 for Telemetry Log: Not Supported 00:09:40.714 Error Log Page Entries Supported: 1 00:09:40.714 Keep Alive: Not Supported 00:09:40.714 00:09:40.714 NVM Command Set Attributes 00:09:40.714 ========================== 00:09:40.714 Submission Queue Entry Size 00:09:40.714 Max: 64 00:09:40.714 Min: 64 00:09:40.714 Completion Queue Entry Size 00:09:40.714 Max: 16 00:09:40.714 Min: 16 00:09:40.714 Number of Namespaces: 256 00:09:40.714 Compare Command: Supported 00:09:40.714 Write Uncorrectable Command: Not Supported 00:09:40.714 Dataset Management Command: Supported 00:09:40.714 Write Zeroes Command: Supported 00:09:40.714 Set Features Save Field: Supported 00:09:40.714 Reservations: Not Supported 00:09:40.714 Timestamp: Supported 00:09:40.714 Copy: Supported 00:09:40.714 Volatile Write Cache: Present 00:09:40.714 Atomic Write Unit (Normal): 1 00:09:40.714 Atomic Write Unit (PFail): 1 00:09:40.714 Atomic Compare & Write Unit: 1 00:09:40.714 Fused Compare & Write: Not Supported 00:09:40.714 Scatter-Gather List 00:09:40.714 SGL Command Set: Supported 00:09:40.714 SGL Keyed: Not Supported 00:09:40.714 SGL Bit Bucket Descriptor: Not Supported 00:09:40.714 SGL Metadata Pointer: Not Supported 00:09:40.714 Oversized SGL: Not Supported 00:09:40.714 SGL Metadata Address: Not Supported 00:09:40.714 SGL Offset: Not Supported 00:09:40.714 Transport SGL Data Block: Not Supported 00:09:40.714 Replay Protected Memory Block: Not Supported 00:09:40.714 00:09:40.714 Firmware Slot Information 00:09:40.714 ========================= 00:09:40.714 Active slot: 1 00:09:40.714 Slot 1 Firmware Revision: 1.0 00:09:40.714 00:09:40.714 00:09:40.714 Commands Supported and Effects 00:09:40.714 ============================== 00:09:40.714 Admin Commands 00:09:40.714 -------------- 00:09:40.714 Delete I/O Submission Queue (00h): Supported 00:09:40.714 Create I/O Submission Queue (01h): Supported 00:09:40.714 Get Log Page (02h): Supported 00:09:40.714 Delete I/O Completion Queue (04h): Supported 00:09:40.714 Create I/O Completion Queue (05h): Supported 00:09:40.714 Identify (06h): Supported 00:09:40.714 Abort (08h): Supported 00:09:40.714 Set Features (09h): Supported 00:09:40.714 Get Features (0Ah): Supported 00:09:40.714 Asynchronous Event Request (0Ch): Supported 00:09:40.714 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:40.714 Directive Send (19h): Supported 00:09:40.714 Directive Receive (1Ah): Supported 00:09:40.714 Virtualization Management (1Ch): Supported 00:09:40.714 Doorbell Buffer Config (7Ch): Supported 00:09:40.714 Format NVM (80h): Supported LBA-Change 00:09:40.714 I/O Commands 00:09:40.714 ------------ 00:09:40.714 Flush (00h): Supported LBA-Change 00:09:40.714 Write (01h): Supported LBA-Change 00:09:40.714 Read (02h): Supported 00:09:40.714 Compare (05h): Supported 00:09:40.714 Write Zeroes (08h): Supported LBA-Change 00:09:40.714 Dataset Management (09h): Supported LBA-Change 00:09:40.714 Unknown (0Ch): Supported 00:09:40.714 Unknown (12h): Supported 00:09:40.714 Copy (19h): Supported LBA-Change 00:09:40.714 Unknown (1Dh): Supported LBA-Change 00:09:40.714 00:09:40.714 Error Log 00:09:40.714 ========= 00:09:40.714 00:09:40.714 Arbitration 00:09:40.714 =========== 
00:09:40.714 Arbitration Burst: no limit 00:09:40.714 00:09:40.714 Power Management 00:09:40.714 ================ 00:09:40.714 Number of Power States: 1 00:09:40.714 Current Power State: Power State #0 00:09:40.714 Power State #0: 00:09:40.714 Max Power: 25.00 W 00:09:40.714 Non-Operational State: Operational 00:09:40.714 Entry Latency: 16 microseconds 00:09:40.714 Exit Latency: 4 microseconds 00:09:40.714 Relative Read Throughput: 0 00:09:40.714 Relative Read Latency: 0 00:09:40.714 Relative Write Throughput: 0 00:09:40.714 Relative Write Latency: 0 00:09:40.714 Idle Power: Not Reported 00:09:40.714 Active Power: Not Reported 00:09:40.714 Non-Operational Permissive Mode: Not Supported 00:09:40.714 00:09:40.714 Health Information 00:09:40.714 ================== 00:09:40.714 Critical Warnings: 00:09:40.714 Available Spare Space: OK 00:09:40.714 Temperature: OK 00:09:40.714 Device Reliability: OK 00:09:40.714 Read Only: No 00:09:40.714 Volatile Memory Backup: OK 00:09:40.714 Current Temperature: 323 Kelvin (50 Celsius) 00:09:40.714 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:40.714 Available Spare: 0% 00:09:40.714 Available Spare Threshold: 0% 00:09:40.714 Life Percentage Used: 0% 00:09:40.714 Data Units Read: 779 00:09:40.714 Data Units Written: 708 00:09:40.714 Host Read Commands: 34960 00:09:40.714 Host Write Commands: 34383 00:09:40.714 Controller Busy Time: 0 minutes 00:09:40.714 Power Cycles: 0 00:09:40.714 Power On Hours: 0 hours 00:09:40.714 Unsafe Shutdowns: 0 00:09:40.714 Unrecoverable Media Errors: 0 00:09:40.714 Lifetime Error Log Entries: 0 00:09:40.714 Warning Temperature Time: 0 minutes 00:09:40.714 Critical Temperature Time: 0 minutes 00:09:40.714 00:09:40.714 Number of Queues 00:09:40.714 ================ 00:09:40.714 Number of I/O Submission Queues: 64 00:09:40.714 Number of I/O Completion Queues: 64 00:09:40.714 00:09:40.714 ZNS Specific Controller Data 00:09:40.714 ============================ 00:09:40.714 Zone Append Size Limit: 0 00:09:40.714 00:09:40.714 00:09:40.714 Active Namespaces 00:09:40.714 ================= 00:09:40.714 Namespace ID:1 00:09:40.714 Error Recovery Timeout: Unlimited 00:09:40.714 Command Set Identifier: NVM (00h) 00:09:40.714 Deallocate: Supported 00:09:40.714 Deallocated/Unwritten Error: Supported 00:09:40.714 Deallocated Read Value: All 0x00 00:09:40.714 Deallocate in Write Zeroes: Not Supported 00:09:40.714 Deallocated Guard Field: 0xFFFF 00:09:40.714 Flush: Supported 00:09:40.714 Reservation: Not Supported 00:09:40.714 Namespace Sharing Capabilities: Multiple Controllers 00:09:40.714 Size (in LBAs): 262144 (1GiB) 00:09:40.714 Capacity (in LBAs): 262144 (1GiB) 00:09:40.714 Utilization (in LBAs): 262144 (1GiB) 00:09:40.714 Thin Provisioning: Not Supported 00:09:40.714 Per-NS Atomic Units: No 00:09:40.714 Maximum Single Source Range Length: 128 00:09:40.714 Maximum Copy Length: 128 00:09:40.715 Maximum Source Range Count: 128 00:09:40.715 NGUID/EUI64 Never Reused: No 00:09:40.715 Namespace Write Protected: No 00:09:40.715 Endurance group ID: 1 00:09:40.715 Number of LBA Formats: 8 00:09:40.715 Current LBA Format: LBA Format #04 00:09:40.715 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:40.715 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:40.715 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:40.715 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:40.715 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:40.715 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:40.715 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:09:40.715 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:40.715 00:09:40.715 Get Feature FDP: 00:09:40.715 ================ 00:09:40.715 Enabled: Yes 00:09:40.715 FDP configuration index: 0 00:09:40.715 00:09:40.715 FDP configurations log page 00:09:40.715 =========================== 00:09:40.715 Number of FDP configurations: 1 00:09:40.715 Version: 0 00:09:40.715 Size: 112 00:09:40.715 FDP Configuration Descriptor: 0 00:09:40.715 Descriptor Size: 96 00:09:40.715 Reclaim Group Identifier format: 2 00:09:40.715 FDP Volatile Write Cache: Not Present 00:09:40.715 FDP Configuration: Valid 00:09:40.715 Vendor Specific Size: 0 00:09:40.715 Number of Reclaim Groups: 2 00:09:40.715 Number of Reclaim Unit Handles: 8 00:09:40.715 Max Placement Identifiers: 128 00:09:40.715 Number of Namespaces Supported: 256 00:09:40.715 Reclaim Unit Nominal Size: 6000000 bytes 00:09:40.715 Estimated Reclaim Unit Time Limit: Not Reported 00:09:40.715 RUH Desc #000: RUH Type: Initially Isolated 00:09:40.715 RUH Desc #001: RUH Type: Initially Isolated 00:09:40.715 RUH Desc #002: RUH Type: Initially Isolated 00:09:40.715 RUH Desc #003: RUH Type: Initially Isolated 00:09:40.715 RUH Desc #004: RUH Type: Initially Isolated 00:09:40.715 RUH Desc #005: RUH Type: Initially Isolated 00:09:40.715 RUH Desc #006: RUH Type: Initially Isolated 00:09:40.715 RUH Desc #007: RUH Type: Initially Isolated 00:09:40.715 00:09:40.715 FDP reclaim unit handle usage log page 00:09:40.715 ====================================== 00:09:40.715 Number of Reclaim Unit Handles: 8 00:09:40.715 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:40.715 RUH Usage Desc #001: RUH Attributes: Unused 00:09:40.715 RUH Usage Desc #002: RUH Attributes: Unused 00:09:40.715 RUH Usage Desc #003: RUH Attributes: Unused 00:09:40.715 RUH Usage Desc #004: RUH Attributes: Unused 00:09:40.715 RUH Usage Desc #005: RUH Attributes: Unused 00:09:40.715 RUH Usage Desc #006: RUH Attributes: Unused 00:09:40.715 RUH Usage Desc #007: RUH Attributes: Unused 00:09:40.715 00:09:40.715 FDP statistics log page 00:09:40.715 ======================= 00:09:40.715 Host bytes with metadata written: 436510720 00:09:40.715 Media bytes with metadata written: 436576256 00:09:40.715 Media bytes erased: 0 00:09:40.715 00:09:40.715 FDP events log page 00:09:40.715 =================== 00:09:40.715 Number of FDP events: 0 00:09:40.715 00:09:40.715 NVM Specific Namespace Data 00:09:40.715 =========================== 00:09:40.715 Logical Block Storage Tag Mask: 0 00:09:40.715 Protection Information Capabilities: 00:09:40.715 16b Guard Protection Information Storage Tag Support: No 00:09:40.715 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:40.715 Storage Tag Check Read Support: No 00:09:40.715 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.715 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.715 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.715 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.715 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.715 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.715 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.715 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:40.715 00:09:40.715 real 0m1.710s 00:09:40.715 user 0m0.713s 00:09:40.715 sys 0m0.798s 00:09:40.715 ************************************ 00:09:40.715 END TEST nvme_identify 00:09:40.715 ************************************ 00:09:40.715 18:54:11 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.715 18:54:11 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:40.715 18:54:11 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:40.715 18:54:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.715 18:54:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.715 18:54:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:40.715 ************************************ 00:09:40.715 START TEST nvme_perf 00:09:40.715 ************************************ 00:09:40.715 18:54:11 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:09:40.715 18:54:11 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:42.142 Initializing NVMe Controllers 00:09:42.142 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:42.142 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:42.142 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:42.142 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:42.142 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:42.142 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:42.142 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:42.142 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:42.142 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:42.142 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:42.142 Initialization complete. Launching workers. 
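One detail worth pulling out of the FDP statistics log page above: comparing "Media bytes with metadata written" against "Host bytes with metadata written" gives a rough write-amplification estimate for this FDP-enabled subsystem. In this run the two differ by exactly 64 KiB, an amplification factor of roughly 1.00015. The arithmetic, as a sketch (Python; the values are copied from the log, and reading the ratio as write amplification is an interpretation, not something spdk_nvme_identify reports):

```python
# Values from the "FDP statistics log page" section of this identify output.
host_bytes_written = 436_510_720   # "Host bytes with metadata written"
media_bytes_written = 436_576_256  # "Media bytes with metadata written"

delta = media_bytes_written - host_bytes_written  # 65536 bytes, i.e. 64 KiB
waf = media_bytes_written / host_bytes_written    # ~1.00015
print(f"extra media bytes: {delta}, write amplification ~{waf:.5f}")
```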
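For the nvme_perf stage that starts here, the flags visible in the invocation mean: -q 128 sets the queue depth, -o 12288 issues 12 KiB I/Os, -w read selects a sequential read workload, and -t 1 runs for one second (the remaining flags, -LL for latency tracking plus -i 0 and -N, come from the harness and are left uninterpreted here). In the summary table that follows, the columns after each device name are IOPS, MiB/s, and average/min/max latency in microseconds; MiB/s is simply IOPS × 12288 / 2^20, e.g. 12555.51 × 12288 / 1048576 ≈ 147.13 for the first row. If the rows need to be extracted programmatically, say to trend results across builds, a regex sketch along these lines would do (illustrative only; parse_summary is not part of the SPDK test suite):

```python
import re

# Illustrative parser for the per-device summary rows spdk_nvme_perf prints
# below, e.g.:
#   PCIE (0000:00:10.0) NSID 1 from core 0: 12555.51 147.13 10216.90 7634.51 47130.69
ROW = re.compile(
    r"PCIE \(([0-9a-fA-F:.]+)\) NSID (\d+) from core (\d+):"
    r"\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)"
)

def parse_summary(log_text: str):
    """Yield one record per device row; the aggregate 'Total' row is skipped."""
    for addr, nsid, core, iops, mibs, avg, lo, hi in ROW.findall(log_text):
        yield {
            "addr": addr, "nsid": int(nsid), "core": int(core),
            "iops": float(iops), "mib_s": float(mibs),
            "avg_us": float(avg), "min_us": float(lo), "max_us": float(hi),
        }
```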
00:09:42.142 ======================================================== 00:09:42.142 Latency(us) 00:09:42.142 Device Information : IOPS MiB/s Average min max 00:09:42.142 PCIE (0000:00:10.0) NSID 1 from core 0: 12555.51 147.13 10216.90 7634.51 47130.69 00:09:42.142 PCIE (0000:00:11.0) NSID 1 from core 0: 12555.51 147.13 10198.31 7763.38 44737.50 00:09:42.142 PCIE (0000:00:13.0) NSID 1 from core 0: 12555.51 147.13 10177.36 7838.54 42934.74 00:09:42.142 PCIE (0000:00:12.0) NSID 1 from core 0: 12619.24 147.88 10101.04 7863.40 35245.24 00:09:42.142 PCIE (0000:00:12.0) NSID 2 from core 0: 12619.24 147.88 10066.41 7855.69 32065.19 00:09:42.142 PCIE (0000:00:12.0) NSID 3 from core 0: 12619.24 147.88 10033.40 7763.31 28126.91 00:09:42.142 ======================================================== 00:09:42.142 Total : 75524.23 885.05 10132.07 7634.51 47130.69 00:09:42.142 00:09:42.142 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:42.142 ================================================================================= 00:09:42.142 1.00000% : 7923.898us 00:09:42.142 10.00000% : 8340.945us 00:09:42.142 25.00000% : 8877.149us 00:09:42.142 50.00000% : 9413.353us 00:09:42.142 75.00000% : 10366.604us 00:09:42.142 90.00000% : 12034.793us 00:09:42.142 95.00000% : 14239.185us 00:09:42.142 98.00000% : 17158.516us 00:09:42.142 99.00000% : 37653.411us 00:09:42.142 99.50000% : 45041.105us 00:09:42.142 99.90000% : 46709.295us 00:09:42.142 99.99000% : 47185.920us 00:09:42.142 99.99900% : 47185.920us 00:09:42.142 99.99990% : 47185.920us 00:09:42.142 99.99999% : 47185.920us 00:09:42.142 00:09:42.142 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:42.142 ================================================================================= 00:09:42.142 1.00000% : 8043.055us 00:09:42.142 10.00000% : 8400.524us 00:09:42.142 25.00000% : 8877.149us 00:09:42.142 50.00000% : 9413.353us 00:09:42.142 75.00000% : 10307.025us 00:09:42.142 90.00000% : 12094.371us 00:09:42.142 95.00000% : 14239.185us 00:09:42.142 98.00000% : 17158.516us 00:09:42.142 99.00000% : 35508.596us 00:09:42.142 99.50000% : 42657.978us 00:09:42.142 99.90000% : 44564.480us 00:09:42.142 99.99000% : 44802.793us 00:09:42.142 99.99900% : 44802.793us 00:09:42.142 99.99990% : 44802.793us 00:09:42.142 99.99999% : 44802.793us 00:09:42.142 00:09:42.142 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:42.142 ================================================================================= 00:09:42.142 1.00000% : 8043.055us 00:09:42.142 10.00000% : 8400.524us 00:09:42.142 25.00000% : 8877.149us 00:09:42.142 50.00000% : 9413.353us 00:09:42.142 75.00000% : 10247.447us 00:09:42.142 90.00000% : 12094.371us 00:09:42.142 95.00000% : 14656.233us 00:09:42.142 98.00000% : 17158.516us 00:09:42.142 99.00000% : 33840.407us 00:09:42.142 99.50000% : 40989.789us 00:09:42.142 99.90000% : 42657.978us 00:09:42.142 99.99000% : 43134.604us 00:09:42.142 99.99900% : 43134.604us 00:09:42.142 99.99990% : 43134.604us 00:09:42.142 99.99999% : 43134.604us 00:09:42.142 00:09:42.142 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:42.142 ================================================================================= 00:09:42.142 1.00000% : 8043.055us 00:09:42.142 10.00000% : 8400.524us 00:09:42.142 25.00000% : 8877.149us 00:09:42.142 50.00000% : 9413.353us 00:09:42.142 75.00000% : 10307.025us 00:09:42.142 90.00000% : 12034.793us 00:09:42.142 95.00000% : 14715.811us 00:09:42.142 98.00000% : 17039.360us 
00:09:42.142 99.00000% : 24546.211us 00:09:42.142 99.50000% : 32887.156us 00:09:42.142 99.90000% : 35031.971us 00:09:42.142 99.99000% : 35270.284us 00:09:42.142 99.99900% : 35270.284us 00:09:42.142 99.99990% : 35270.284us 00:09:42.142 99.99999% : 35270.284us 00:09:42.142 00:09:42.142 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:42.142 ================================================================================= 00:09:42.142 1.00000% : 8043.055us 00:09:42.142 10.00000% : 8400.524us 00:09:42.142 25.00000% : 8877.149us 00:09:42.142 50.00000% : 9413.353us 00:09:42.142 75.00000% : 10307.025us 00:09:42.142 90.00000% : 12034.793us 00:09:42.142 95.00000% : 14715.811us 00:09:42.142 98.00000% : 17039.360us 00:09:42.142 99.00000% : 20852.364us 00:09:42.142 99.50000% : 29074.153us 00:09:42.142 99.90000% : 31695.593us 00:09:42.142 99.99000% : 32172.218us 00:09:42.142 99.99900% : 32172.218us 00:09:42.142 99.99990% : 32172.218us 00:09:42.142 99.99999% : 32172.218us 00:09:42.142 00:09:42.142 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:42.142 ================================================================================= 00:09:42.142 1.00000% : 8043.055us 00:09:42.142 10.00000% : 8400.524us 00:09:42.142 25.00000% : 8877.149us 00:09:42.142 50.00000% : 9413.353us 00:09:42.142 75.00000% : 10366.604us 00:09:42.142 90.00000% : 12094.371us 00:09:42.142 95.00000% : 14120.029us 00:09:42.142 98.00000% : 17039.360us 00:09:42.142 99.00000% : 18350.080us 00:09:42.142 99.50000% : 25380.305us 00:09:42.142 99.90000% : 27644.276us 00:09:42.142 99.99000% : 28120.902us 00:09:42.142 99.99900% : 28240.058us 00:09:42.142 99.99990% : 28240.058us 00:09:42.142 99.99999% : 28240.058us 00:09:42.142 00:09:42.143 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:42.143 ============================================================================== 00:09:42.143 Range in us Cumulative IO count 00:09:42.143 7626.007 - 7685.585: 0.0238% ( 3) 00:09:42.143 7685.585 - 7745.164: 0.0714% ( 6) 00:09:42.143 7745.164 - 7804.742: 0.1586% ( 11) 00:09:42.143 7804.742 - 7864.320: 0.5314% ( 47) 00:09:42.143 7864.320 - 7923.898: 1.1580% ( 79) 00:09:42.143 7923.898 - 7983.476: 1.9670% ( 102) 00:09:42.143 7983.476 - 8043.055: 3.1885% ( 154) 00:09:42.143 8043.055 - 8102.633: 4.4971% ( 165) 00:09:42.143 8102.633 - 8162.211: 5.7820% ( 162) 00:09:42.143 8162.211 - 8221.789: 7.0987% ( 166) 00:09:42.143 8221.789 - 8281.367: 8.5184% ( 179) 00:09:42.143 8281.367 - 8340.945: 10.0095% ( 188) 00:09:42.143 8340.945 - 8400.524: 11.5086% ( 189) 00:09:42.143 8400.524 - 8460.102: 13.0314% ( 192) 00:09:42.143 8460.102 - 8519.680: 14.5225% ( 188) 00:09:42.143 8519.680 - 8579.258: 16.2278% ( 215) 00:09:42.143 8579.258 - 8638.836: 17.8379% ( 203) 00:09:42.143 8638.836 - 8698.415: 19.6542% ( 229) 00:09:42.143 8698.415 - 8757.993: 21.8036% ( 271) 00:09:42.143 8757.993 - 8817.571: 24.2148% ( 304) 00:09:42.143 8817.571 - 8877.149: 26.8004% ( 326) 00:09:42.143 8877.149 - 8936.727: 29.6320% ( 357) 00:09:42.143 8936.727 - 8996.305: 32.4635% ( 357) 00:09:42.143 8996.305 - 9055.884: 35.4140% ( 372) 00:09:42.143 9055.884 - 9115.462: 38.4756% ( 386) 00:09:42.143 9115.462 - 9175.040: 41.4816% ( 379) 00:09:42.143 9175.040 - 9234.618: 44.1942% ( 342) 00:09:42.143 9234.618 - 9294.196: 46.8512% ( 335) 00:09:42.143 9294.196 - 9353.775: 49.2465% ( 302) 00:09:42.143 9353.775 - 9413.353: 51.2214% ( 249) 00:09:42.143 9413.353 - 9472.931: 53.3550% ( 269) 00:09:42.143 9472.931 - 9532.509: 55.3458% ( 251) 00:09:42.143 
9532.509 - 9592.087: 57.2573% ( 241) 00:09:42.143 9592.087 - 9651.665: 59.0577% ( 227) 00:09:42.143 9651.665 - 9711.244: 60.8027% ( 220) 00:09:42.143 9711.244 - 9770.822: 62.6190% ( 229) 00:09:42.143 9770.822 - 9830.400: 64.4511% ( 231) 00:09:42.143 9830.400 - 9889.978: 66.2595% ( 228) 00:09:42.143 9889.978 - 9949.556: 67.9648% ( 215) 00:09:42.143 9949.556 - 10009.135: 69.5828% ( 204) 00:09:42.143 10009.135 - 10068.713: 71.1612% ( 199) 00:09:42.143 10068.713 - 10128.291: 72.3905% ( 155) 00:09:42.143 10128.291 - 10187.869: 73.3582% ( 122) 00:09:42.143 10187.869 - 10247.447: 74.1910% ( 105) 00:09:42.143 10247.447 - 10307.025: 74.8096% ( 78) 00:09:42.143 10307.025 - 10366.604: 75.3331% ( 66) 00:09:42.143 10366.604 - 10426.182: 75.8169% ( 61) 00:09:42.143 10426.182 - 10485.760: 76.3325% ( 65) 00:09:42.143 10485.760 - 10545.338: 76.8163% ( 61) 00:09:42.143 10545.338 - 10604.916: 77.3001% ( 61) 00:09:42.143 10604.916 - 10664.495: 77.7602% ( 58) 00:09:42.143 10664.495 - 10724.073: 78.1885% ( 54) 00:09:42.143 10724.073 - 10783.651: 78.6723% ( 61) 00:09:42.143 10783.651 - 10843.229: 79.1244% ( 57) 00:09:42.143 10843.229 - 10902.807: 79.6082% ( 61) 00:09:42.143 10902.807 - 10962.385: 80.1713% ( 71) 00:09:42.143 10962.385 - 11021.964: 80.7820% ( 77) 00:09:42.143 11021.964 - 11081.542: 81.4007% ( 78) 00:09:42.143 11081.542 - 11141.120: 82.0273% ( 79) 00:09:42.143 11141.120 - 11200.698: 82.6618% ( 80) 00:09:42.143 11200.698 - 11260.276: 83.3122% ( 82) 00:09:42.143 11260.276 - 11319.855: 83.9626% ( 82) 00:09:42.143 11319.855 - 11379.433: 84.5733% ( 77) 00:09:42.143 11379.433 - 11439.011: 85.1285% ( 70) 00:09:42.143 11439.011 - 11498.589: 85.7392% ( 77) 00:09:42.143 11498.589 - 11558.167: 86.2865% ( 69) 00:09:42.143 11558.167 - 11617.745: 86.7782% ( 62) 00:09:42.143 11617.745 - 11677.324: 87.3414% ( 71) 00:09:42.143 11677.324 - 11736.902: 87.8014% ( 58) 00:09:42.143 11736.902 - 11796.480: 88.3090% ( 64) 00:09:42.143 11796.480 - 11856.058: 88.7611% ( 57) 00:09:42.143 11856.058 - 11915.636: 89.1973% ( 55) 00:09:42.143 11915.636 - 11975.215: 89.6336% ( 55) 00:09:42.143 11975.215 - 12034.793: 90.0063% ( 47) 00:09:42.143 12034.793 - 12094.371: 90.3236% ( 40) 00:09:42.143 12094.371 - 12153.949: 90.6250% ( 38) 00:09:42.143 12153.949 - 12213.527: 90.8709% ( 31) 00:09:42.143 12213.527 - 12273.105: 91.1564% ( 36) 00:09:42.143 12273.105 - 12332.684: 91.3944% ( 30) 00:09:42.143 12332.684 - 12392.262: 91.5847% ( 24) 00:09:42.143 12392.262 - 12451.840: 91.7751% ( 24) 00:09:42.143 12451.840 - 12511.418: 91.9654% ( 24) 00:09:42.143 12511.418 - 12570.996: 92.1003% ( 17) 00:09:42.143 12570.996 - 12630.575: 92.2985% ( 25) 00:09:42.143 12630.575 - 12690.153: 92.4730% ( 22) 00:09:42.143 12690.153 - 12749.731: 92.6158% ( 18) 00:09:42.143 12749.731 - 12809.309: 92.7427% ( 16) 00:09:42.143 12809.309 - 12868.887: 92.9251% ( 23) 00:09:42.143 12868.887 - 12928.465: 93.0441% ( 15) 00:09:42.143 12928.465 - 12988.044: 93.1869% ( 18) 00:09:42.143 12988.044 - 13047.622: 93.2662% ( 10) 00:09:42.143 13047.622 - 13107.200: 93.3852% ( 15) 00:09:42.143 13107.200 - 13166.778: 93.4724% ( 11) 00:09:42.143 13166.778 - 13226.356: 93.5517% ( 10) 00:09:42.143 13226.356 - 13285.935: 93.6310% ( 10) 00:09:42.143 13285.935 - 13345.513: 93.7341% ( 13) 00:09:42.143 13345.513 - 13405.091: 93.8055% ( 9) 00:09:42.143 13405.091 - 13464.669: 93.8848% ( 10) 00:09:42.143 13464.669 - 13524.247: 93.9562% ( 9) 00:09:42.143 13524.247 - 13583.825: 94.0514% ( 12) 00:09:42.143 13583.825 - 13643.404: 94.1466% ( 12) 00:09:42.143 13643.404 - 13702.982: 94.2021% ( 7) 
00:09:42.143 13702.982 - 13762.560: 94.2973% ( 12) 00:09:42.143 13762.560 - 13822.138: 94.3766% ( 10) 00:09:42.143 13822.138 - 13881.716: 94.4956% ( 15) 00:09:42.143 13881.716 - 13941.295: 94.5828% ( 11) 00:09:42.143 13941.295 - 14000.873: 94.6701% ( 11) 00:09:42.143 14000.873 - 14060.451: 94.7335% ( 8) 00:09:42.143 14060.451 - 14120.029: 94.8366% ( 13) 00:09:42.143 14120.029 - 14179.607: 94.9159% ( 10) 00:09:42.143 14179.607 - 14239.185: 95.0190% ( 13) 00:09:42.143 14239.185 - 14298.764: 95.0825% ( 8) 00:09:42.143 14298.764 - 14358.342: 95.1459% ( 8) 00:09:42.143 14358.342 - 14417.920: 95.2094% ( 8) 00:09:42.143 14417.920 - 14477.498: 95.2728% ( 8) 00:09:42.143 14477.498 - 14537.076: 95.3442% ( 9) 00:09:42.143 14537.076 - 14596.655: 95.4077% ( 8) 00:09:42.143 14596.655 - 14656.233: 95.4473% ( 5) 00:09:42.143 14656.233 - 14715.811: 95.4949% ( 6) 00:09:42.143 14715.811 - 14775.389: 95.5425% ( 6) 00:09:42.143 14775.389 - 14834.967: 95.5901% ( 6) 00:09:42.143 14834.967 - 14894.545: 95.6298% ( 5) 00:09:42.143 14894.545 - 14954.124: 95.6853% ( 7) 00:09:42.143 14954.124 - 15013.702: 95.7567% ( 9) 00:09:42.143 15013.702 - 15073.280: 95.7963% ( 5) 00:09:42.143 15073.280 - 15132.858: 95.8598% ( 8) 00:09:42.143 15132.858 - 15192.436: 95.8994% ( 5) 00:09:42.143 15192.436 - 15252.015: 95.9470% ( 6) 00:09:42.143 15252.015 - 15371.171: 96.0898% ( 18) 00:09:42.143 15371.171 - 15490.327: 96.2405% ( 19) 00:09:42.143 15490.327 - 15609.484: 96.3991% ( 20) 00:09:42.143 15609.484 - 15728.640: 96.5339% ( 17) 00:09:42.143 15728.640 - 15847.796: 96.6609% ( 16) 00:09:42.143 15847.796 - 15966.953: 96.8036% ( 18) 00:09:42.143 15966.953 - 16086.109: 96.9385% ( 17) 00:09:42.143 16086.109 - 16205.265: 97.0812% ( 18) 00:09:42.143 16205.265 - 16324.422: 97.2398% ( 20) 00:09:42.143 16324.422 - 16443.578: 97.3747% ( 17) 00:09:42.143 16443.578 - 16562.735: 97.5174% ( 18) 00:09:42.143 16562.735 - 16681.891: 97.6602% ( 18) 00:09:42.143 16681.891 - 16801.047: 97.7713% ( 14) 00:09:42.143 16801.047 - 16920.204: 97.8823% ( 14) 00:09:42.143 16920.204 - 17039.360: 97.9854% ( 13) 00:09:42.143 17039.360 - 17158.516: 98.0806% ( 12) 00:09:42.143 17158.516 - 17277.673: 98.1916% ( 14) 00:09:42.143 17277.673 - 17396.829: 98.2947% ( 13) 00:09:42.143 17396.829 - 17515.985: 98.4137% ( 15) 00:09:42.143 17515.985 - 17635.142: 98.4851% ( 9) 00:09:42.143 17635.142 - 17754.298: 98.6120% ( 16) 00:09:42.143 17754.298 - 17873.455: 98.6913% ( 10) 00:09:42.143 17873.455 - 17992.611: 98.7786% ( 11) 00:09:42.143 17992.611 - 18111.767: 98.8658% ( 11) 00:09:42.143 18111.767 - 18230.924: 98.9055% ( 5) 00:09:42.143 18230.924 - 18350.080: 98.9134% ( 1) 00:09:42.143 18350.080 - 18469.236: 98.9372% ( 3) 00:09:42.143 18469.236 - 18588.393: 98.9530% ( 2) 00:09:42.143 18588.393 - 18707.549: 98.9768% ( 3) 00:09:42.143 18707.549 - 18826.705: 98.9848% ( 1) 00:09:42.143 37415.098 - 37653.411: 99.0086% ( 3) 00:09:42.143 37653.411 - 37891.724: 99.0562% ( 6) 00:09:42.143 37891.724 - 38130.036: 99.1117% ( 7) 00:09:42.143 38130.036 - 38368.349: 99.1672% ( 7) 00:09:42.143 38368.349 - 38606.662: 99.2069% ( 5) 00:09:42.143 38606.662 - 38844.975: 99.2624% ( 7) 00:09:42.143 38844.975 - 39083.287: 99.3100% ( 6) 00:09:42.143 39083.287 - 39321.600: 99.3496% ( 5) 00:09:42.143 39321.600 - 39559.913: 99.4051% ( 7) 00:09:42.143 39559.913 - 39798.225: 99.4527% ( 6) 00:09:42.143 39798.225 - 40036.538: 99.4924% ( 5) 00:09:42.143 44802.793 - 45041.105: 99.5241% ( 4) 00:09:42.143 45041.105 - 45279.418: 99.5717% ( 6) 00:09:42.143 45279.418 - 45517.731: 99.6272% ( 7) 00:09:42.143 45517.731 
- 45756.044: 99.6827% ( 7) 00:09:42.143 45756.044 - 45994.356: 99.7383% ( 7) 00:09:42.143 45994.356 - 46232.669: 99.7938% ( 7) 00:09:42.143 46232.669 - 46470.982: 99.8572% ( 8) 00:09:42.143 46470.982 - 46709.295: 99.9048% ( 6) 00:09:42.143 46709.295 - 46947.607: 99.9603% ( 7) 00:09:42.143 46947.607 - 47185.920: 100.0000% ( 5) 00:09:42.143 00:09:42.143 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:42.143 ============================================================================== 00:09:42.143 Range in us Cumulative IO count 00:09:42.143 7745.164 - 7804.742: 0.0238% ( 3) 00:09:42.143 7804.742 - 7864.320: 0.0793% ( 7) 00:09:42.143 7864.320 - 7923.898: 0.2459% ( 21) 00:09:42.144 7923.898 - 7983.476: 0.6424% ( 50) 00:09:42.144 7983.476 - 8043.055: 1.4118% ( 97) 00:09:42.144 8043.055 - 8102.633: 2.5381% ( 142) 00:09:42.144 8102.633 - 8162.211: 3.8864% ( 170) 00:09:42.144 8162.211 - 8221.789: 5.4251% ( 194) 00:09:42.144 8221.789 - 8281.367: 6.9797% ( 196) 00:09:42.144 8281.367 - 8340.945: 8.6056% ( 205) 00:09:42.144 8340.945 - 8400.524: 10.3347% ( 218) 00:09:42.144 8400.524 - 8460.102: 12.1352% ( 227) 00:09:42.144 8460.102 - 8519.680: 13.9039% ( 223) 00:09:42.144 8519.680 - 8579.258: 15.7043% ( 227) 00:09:42.144 8579.258 - 8638.836: 17.5999% ( 239) 00:09:42.144 8638.836 - 8698.415: 19.4956% ( 239) 00:09:42.144 8698.415 - 8757.993: 21.4705% ( 249) 00:09:42.144 8757.993 - 8817.571: 23.7310% ( 285) 00:09:42.144 8817.571 - 8877.149: 26.3166% ( 326) 00:09:42.144 8877.149 - 8936.727: 29.1402% ( 356) 00:09:42.144 8936.727 - 8996.305: 32.2018% ( 386) 00:09:42.144 8996.305 - 9055.884: 35.4457% ( 409) 00:09:42.144 9055.884 - 9115.462: 38.5628% ( 393) 00:09:42.144 9115.462 - 9175.040: 41.4102% ( 359) 00:09:42.144 9175.040 - 9234.618: 44.0514% ( 333) 00:09:42.144 9234.618 - 9294.196: 46.5498% ( 315) 00:09:42.144 9294.196 - 9353.775: 48.8658% ( 292) 00:09:42.144 9353.775 - 9413.353: 51.1818% ( 292) 00:09:42.144 9413.353 - 9472.931: 53.4661% ( 288) 00:09:42.144 9472.931 - 9532.509: 55.7107% ( 283) 00:09:42.144 9532.509 - 9592.087: 57.9394% ( 281) 00:09:42.144 9592.087 - 9651.665: 60.1364% ( 277) 00:09:42.144 9651.665 - 9711.244: 62.2859% ( 271) 00:09:42.144 9711.244 - 9770.822: 64.3639% ( 262) 00:09:42.144 9770.822 - 9830.400: 66.2992% ( 244) 00:09:42.144 9830.400 - 9889.978: 68.1551% ( 234) 00:09:42.144 9889.978 - 9949.556: 69.8049% ( 208) 00:09:42.144 9949.556 - 10009.135: 71.1929% ( 175) 00:09:42.144 10009.135 - 10068.713: 72.4143% ( 154) 00:09:42.144 10068.713 - 10128.291: 73.3740% ( 121) 00:09:42.144 10128.291 - 10187.869: 74.1037% ( 92) 00:09:42.144 10187.869 - 10247.447: 74.6272% ( 66) 00:09:42.144 10247.447 - 10307.025: 75.1666% ( 68) 00:09:42.144 10307.025 - 10366.604: 75.6821% ( 65) 00:09:42.144 10366.604 - 10426.182: 76.2135% ( 67) 00:09:42.144 10426.182 - 10485.760: 76.7211% ( 64) 00:09:42.144 10485.760 - 10545.338: 77.1098% ( 49) 00:09:42.144 10545.338 - 10604.916: 77.4350% ( 41) 00:09:42.144 10604.916 - 10664.495: 77.7998% ( 46) 00:09:42.144 10664.495 - 10724.073: 78.1805% ( 48) 00:09:42.144 10724.073 - 10783.651: 78.5533% ( 47) 00:09:42.144 10783.651 - 10843.229: 78.9975% ( 56) 00:09:42.144 10843.229 - 10902.807: 79.5130% ( 65) 00:09:42.144 10902.807 - 10962.385: 80.0365% ( 66) 00:09:42.144 10962.385 - 11021.964: 80.6313% ( 75) 00:09:42.144 11021.964 - 11081.542: 81.2103% ( 73) 00:09:42.144 11081.542 - 11141.120: 81.8449% ( 80) 00:09:42.144 11141.120 - 11200.698: 82.4714% ( 79) 00:09:42.144 11200.698 - 11260.276: 83.1139% ( 81) 00:09:42.144 11260.276 - 11319.855: 83.7326% 
( 78) 00:09:42.144 11319.855 - 11379.433: 84.3671% ( 80) 00:09:42.144 11379.433 - 11439.011: 85.0333% ( 84) 00:09:42.144 11439.011 - 11498.589: 85.6916% ( 83) 00:09:42.144 11498.589 - 11558.167: 86.3103% ( 78) 00:09:42.144 11558.167 - 11617.745: 86.8417% ( 67) 00:09:42.144 11617.745 - 11677.324: 87.3810% ( 68) 00:09:42.144 11677.324 - 11736.902: 87.9045% ( 66) 00:09:42.144 11736.902 - 11796.480: 88.4042% ( 63) 00:09:42.144 11796.480 - 11856.058: 88.8404% ( 55) 00:09:42.144 11856.058 - 11915.636: 89.2529% ( 52) 00:09:42.144 11915.636 - 11975.215: 89.5860% ( 42) 00:09:42.144 11975.215 - 12034.793: 89.9191% ( 42) 00:09:42.144 12034.793 - 12094.371: 90.1729% ( 32) 00:09:42.144 12094.371 - 12153.949: 90.4346% ( 33) 00:09:42.144 12153.949 - 12213.527: 90.6885% ( 32) 00:09:42.144 12213.527 - 12273.105: 90.9343% ( 31) 00:09:42.144 12273.105 - 12332.684: 91.1881% ( 32) 00:09:42.144 12332.684 - 12392.262: 91.3785% ( 24) 00:09:42.144 12392.262 - 12451.840: 91.5768% ( 25) 00:09:42.144 12451.840 - 12511.418: 91.7909% ( 27) 00:09:42.144 12511.418 - 12570.996: 92.0209% ( 29) 00:09:42.144 12570.996 - 12630.575: 92.2747% ( 32) 00:09:42.144 12630.575 - 12690.153: 92.4968% ( 28) 00:09:42.144 12690.153 - 12749.731: 92.7030% ( 26) 00:09:42.144 12749.731 - 12809.309: 92.8458% ( 18) 00:09:42.144 12809.309 - 12868.887: 92.9569% ( 14) 00:09:42.144 12868.887 - 12928.465: 93.0917% ( 17) 00:09:42.144 12928.465 - 12988.044: 93.1869% ( 12) 00:09:42.144 12988.044 - 13047.622: 93.3058% ( 15) 00:09:42.144 13047.622 - 13107.200: 93.4327% ( 16) 00:09:42.144 13107.200 - 13166.778: 93.5279% ( 12) 00:09:42.144 13166.778 - 13226.356: 93.6390% ( 14) 00:09:42.144 13226.356 - 13285.935: 93.7500% ( 14) 00:09:42.144 13285.935 - 13345.513: 93.8690% ( 15) 00:09:42.144 13345.513 - 13405.091: 93.9562% ( 11) 00:09:42.144 13405.091 - 13464.669: 94.0514% ( 12) 00:09:42.144 13464.669 - 13524.247: 94.1545% ( 13) 00:09:42.144 13524.247 - 13583.825: 94.2497% ( 12) 00:09:42.144 13583.825 - 13643.404: 94.3528% ( 13) 00:09:42.144 13643.404 - 13702.982: 94.4638% ( 14) 00:09:42.144 13702.982 - 13762.560: 94.5511% ( 11) 00:09:42.144 13762.560 - 13822.138: 94.6145% ( 8) 00:09:42.144 13822.138 - 13881.716: 94.6859% ( 9) 00:09:42.144 13881.716 - 13941.295: 94.7652% ( 10) 00:09:42.144 13941.295 - 14000.873: 94.8366% ( 9) 00:09:42.144 14000.873 - 14060.451: 94.8842% ( 6) 00:09:42.144 14060.451 - 14120.029: 94.9397% ( 7) 00:09:42.144 14120.029 - 14179.607: 94.9794% ( 5) 00:09:42.144 14179.607 - 14239.185: 95.0349% ( 7) 00:09:42.144 14239.185 - 14298.764: 95.0904% ( 7) 00:09:42.144 14298.764 - 14358.342: 95.1221% ( 4) 00:09:42.144 14358.342 - 14417.920: 95.1618% ( 5) 00:09:42.144 14417.920 - 14477.498: 95.2332% ( 9) 00:09:42.144 14477.498 - 14537.076: 95.2728% ( 5) 00:09:42.144 14537.076 - 14596.655: 95.3204% ( 6) 00:09:42.144 14596.655 - 14656.233: 95.3601% ( 5) 00:09:42.144 14656.233 - 14715.811: 95.3997% ( 5) 00:09:42.144 14715.811 - 14775.389: 95.4473% ( 6) 00:09:42.144 14775.389 - 14834.967: 95.4870% ( 5) 00:09:42.144 14834.967 - 14894.545: 95.5266% ( 5) 00:09:42.144 14894.545 - 14954.124: 95.5663% ( 5) 00:09:42.144 14954.124 - 15013.702: 95.6060% ( 5) 00:09:42.144 15013.702 - 15073.280: 95.6456% ( 5) 00:09:42.144 15073.280 - 15132.858: 95.6853% ( 5) 00:09:42.144 15132.858 - 15192.436: 95.7487% ( 8) 00:09:42.144 15192.436 - 15252.015: 95.7963% ( 6) 00:09:42.144 15252.015 - 15371.171: 95.9153% ( 15) 00:09:42.144 15371.171 - 15490.327: 96.0263% ( 14) 00:09:42.144 15490.327 - 15609.484: 96.1294% ( 13) 00:09:42.144 15609.484 - 15728.640: 96.2484% ( 15) 
00:09:42.144 15728.640 - 15847.796: 96.3912% ( 18) 00:09:42.144 15847.796 - 15966.953: 96.5577% ( 21) 00:09:42.144 15966.953 - 16086.109: 96.7322% ( 22) 00:09:42.144 16086.109 - 16205.265: 96.8988% ( 21) 00:09:42.144 16205.265 - 16324.422: 97.0654% ( 21) 00:09:42.144 16324.422 - 16443.578: 97.2398% ( 22) 00:09:42.144 16443.578 - 16562.735: 97.3905% ( 19) 00:09:42.144 16562.735 - 16681.891: 97.5650% ( 22) 00:09:42.144 16681.891 - 16801.047: 97.7316% ( 21) 00:09:42.144 16801.047 - 16920.204: 97.8347% ( 13) 00:09:42.144 16920.204 - 17039.360: 97.9616% ( 16) 00:09:42.144 17039.360 - 17158.516: 98.0885% ( 16) 00:09:42.144 17158.516 - 17277.673: 98.2075% ( 15) 00:09:42.144 17277.673 - 17396.829: 98.3265% ( 15) 00:09:42.144 17396.829 - 17515.985: 98.4454% ( 15) 00:09:42.144 17515.985 - 17635.142: 98.5644% ( 15) 00:09:42.144 17635.142 - 17754.298: 98.6754% ( 14) 00:09:42.144 17754.298 - 17873.455: 98.7627% ( 11) 00:09:42.144 17873.455 - 17992.611: 98.8420% ( 10) 00:09:42.144 17992.611 - 18111.767: 98.9134% ( 9) 00:09:42.144 18111.767 - 18230.924: 98.9610% ( 6) 00:09:42.144 18230.924 - 18350.080: 98.9848% ( 3) 00:09:42.144 35270.284 - 35508.596: 99.0086% ( 3) 00:09:42.144 35508.596 - 35746.909: 99.0562% ( 6) 00:09:42.144 35746.909 - 35985.222: 99.1117% ( 7) 00:09:42.144 35985.222 - 36223.535: 99.1672% ( 7) 00:09:42.144 36223.535 - 36461.847: 99.2227% ( 7) 00:09:42.144 36461.847 - 36700.160: 99.2703% ( 6) 00:09:42.144 36700.160 - 36938.473: 99.3100% ( 5) 00:09:42.144 36938.473 - 37176.785: 99.3655% ( 7) 00:09:42.144 37176.785 - 37415.098: 99.4131% ( 6) 00:09:42.144 37415.098 - 37653.411: 99.4686% ( 7) 00:09:42.144 37653.411 - 37891.724: 99.4924% ( 3) 00:09:42.144 42419.665 - 42657.978: 99.5003% ( 1) 00:09:42.144 42657.978 - 42896.291: 99.5558% ( 7) 00:09:42.144 42896.291 - 43134.604: 99.6193% ( 8) 00:09:42.144 43134.604 - 43372.916: 99.6669% ( 6) 00:09:42.144 43372.916 - 43611.229: 99.7303% ( 8) 00:09:42.144 43611.229 - 43849.542: 99.7859% ( 7) 00:09:42.144 43849.542 - 44087.855: 99.8414% ( 7) 00:09:42.144 44087.855 - 44326.167: 99.8969% ( 7) 00:09:42.144 44326.167 - 44564.480: 99.9603% ( 8) 00:09:42.144 44564.480 - 44802.793: 100.0000% ( 5) 00:09:42.144 00:09:42.144 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:42.144 ============================================================================== 00:09:42.144 Range in us Cumulative IO count 00:09:42.144 7804.742 - 7864.320: 0.0238% ( 3) 00:09:42.144 7864.320 - 7923.898: 0.1428% ( 15) 00:09:42.144 7923.898 - 7983.476: 0.5155% ( 47) 00:09:42.144 7983.476 - 8043.055: 1.3008% ( 99) 00:09:42.144 8043.055 - 8102.633: 2.4350% ( 143) 00:09:42.144 8102.633 - 8162.211: 3.7992% ( 172) 00:09:42.144 8162.211 - 8221.789: 5.3775% ( 199) 00:09:42.144 8221.789 - 8281.367: 6.9638% ( 200) 00:09:42.144 8281.367 - 8340.945: 8.5819% ( 204) 00:09:42.144 8340.945 - 8400.524: 10.3109% ( 218) 00:09:42.144 8400.524 - 8460.102: 12.0241% ( 216) 00:09:42.144 8460.102 - 8519.680: 13.8325% ( 228) 00:09:42.144 8519.680 - 8579.258: 15.5853% ( 221) 00:09:42.144 8579.258 - 8638.836: 17.4651% ( 237) 00:09:42.144 8638.836 - 8698.415: 19.4162% ( 246) 00:09:42.144 8698.415 - 8757.993: 21.3674% ( 246) 00:09:42.144 8757.993 - 8817.571: 23.5882% ( 280) 00:09:42.145 8817.571 - 8877.149: 26.0945% ( 316) 00:09:42.145 8877.149 - 8936.727: 28.8785% ( 351) 00:09:42.145 8936.727 - 8996.305: 32.0987% ( 406) 00:09:42.145 8996.305 - 9055.884: 35.3902% ( 415) 00:09:42.145 9055.884 - 9115.462: 38.6818% ( 415) 00:09:42.145 9115.462 - 9175.040: 41.5609% ( 363) 00:09:42.145 9175.040 - 
9234.618: 44.1862% ( 331) 00:09:42.145 9234.618 - 9294.196: 46.6609% ( 312) 00:09:42.145 9294.196 - 9353.775: 48.8499% ( 276) 00:09:42.145 9353.775 - 9413.353: 51.2452% ( 302) 00:09:42.145 9413.353 - 9472.931: 53.5216% ( 287) 00:09:42.145 9472.931 - 9532.509: 55.7900% ( 286) 00:09:42.145 9532.509 - 9592.087: 58.0029% ( 279) 00:09:42.145 9592.087 - 9651.665: 60.1047% ( 265) 00:09:42.145 9651.665 - 9711.244: 62.2145% ( 266) 00:09:42.145 9711.244 - 9770.822: 64.3084% ( 264) 00:09:42.145 9770.822 - 9830.400: 66.3547% ( 258) 00:09:42.145 9830.400 - 9889.978: 68.4010% ( 258) 00:09:42.145 9889.978 - 9949.556: 70.0984% ( 214) 00:09:42.145 9949.556 - 10009.135: 71.6371% ( 194) 00:09:42.145 10009.135 - 10068.713: 72.9854% ( 170) 00:09:42.145 10068.713 - 10128.291: 74.0562% ( 135) 00:09:42.145 10128.291 - 10187.869: 74.8096% ( 95) 00:09:42.145 10187.869 - 10247.447: 75.4045% ( 75) 00:09:42.145 10247.447 - 10307.025: 75.9201% ( 65) 00:09:42.145 10307.025 - 10366.604: 76.4356% ( 65) 00:09:42.145 10366.604 - 10426.182: 76.9273% ( 62) 00:09:42.145 10426.182 - 10485.760: 77.3715% ( 56) 00:09:42.145 10485.760 - 10545.338: 77.7839% ( 52) 00:09:42.145 10545.338 - 10604.916: 78.1250% ( 43) 00:09:42.145 10604.916 - 10664.495: 78.4740% ( 44) 00:09:42.145 10664.495 - 10724.073: 78.8547% ( 48) 00:09:42.145 10724.073 - 10783.651: 79.2989% ( 56) 00:09:42.145 10783.651 - 10843.229: 79.7192% ( 53) 00:09:42.145 10843.229 - 10902.807: 80.1475% ( 54) 00:09:42.145 10902.807 - 10962.385: 80.5996% ( 57) 00:09:42.145 10962.385 - 11021.964: 81.0914% ( 62) 00:09:42.145 11021.964 - 11081.542: 81.6069% ( 65) 00:09:42.145 11081.542 - 11141.120: 82.1145% ( 64) 00:09:42.145 11141.120 - 11200.698: 82.6856% ( 72) 00:09:42.145 11200.698 - 11260.276: 83.2725% ( 74) 00:09:42.145 11260.276 - 11319.855: 83.8991% ( 79) 00:09:42.145 11319.855 - 11379.433: 84.4940% ( 75) 00:09:42.145 11379.433 - 11439.011: 85.1126% ( 78) 00:09:42.145 11439.011 - 11498.589: 85.6916% ( 73) 00:09:42.145 11498.589 - 11558.167: 86.2548% ( 71) 00:09:42.145 11558.167 - 11617.745: 86.8100% ( 70) 00:09:42.145 11617.745 - 11677.324: 87.3017% ( 62) 00:09:42.145 11677.324 - 11736.902: 87.7379% ( 55) 00:09:42.145 11736.902 - 11796.480: 88.2376% ( 63) 00:09:42.145 11796.480 - 11856.058: 88.6580% ( 53) 00:09:42.145 11856.058 - 11915.636: 89.1101% ( 57) 00:09:42.145 11915.636 - 11975.215: 89.5067% ( 50) 00:09:42.145 11975.215 - 12034.793: 89.8794% ( 47) 00:09:42.145 12034.793 - 12094.371: 90.2602% ( 48) 00:09:42.145 12094.371 - 12153.949: 90.5457% ( 36) 00:09:42.145 12153.949 - 12213.527: 90.8391% ( 37) 00:09:42.145 12213.527 - 12273.105: 91.1168% ( 35) 00:09:42.145 12273.105 - 12332.684: 91.3547% ( 30) 00:09:42.145 12332.684 - 12392.262: 91.5768% ( 28) 00:09:42.145 12392.262 - 12451.840: 91.8306% ( 32) 00:09:42.145 12451.840 - 12511.418: 92.0685% ( 30) 00:09:42.145 12511.418 - 12570.996: 92.3144% ( 31) 00:09:42.145 12570.996 - 12630.575: 92.5761% ( 33) 00:09:42.145 12630.575 - 12690.153: 92.8141% ( 30) 00:09:42.145 12690.153 - 12749.731: 93.0679% ( 32) 00:09:42.145 12749.731 - 12809.309: 93.2741% ( 26) 00:09:42.145 12809.309 - 12868.887: 93.4407% ( 21) 00:09:42.145 12868.887 - 12928.465: 93.6152% ( 22) 00:09:42.145 12928.465 - 12988.044: 93.7183% ( 13) 00:09:42.145 12988.044 - 13047.622: 93.8214% ( 13) 00:09:42.145 13047.622 - 13107.200: 93.8928% ( 9) 00:09:42.145 13107.200 - 13166.778: 93.9641% ( 9) 00:09:42.145 13166.778 - 13226.356: 94.0355% ( 9) 00:09:42.145 13226.356 - 13285.935: 94.0752% ( 5) 00:09:42.145 13285.935 - 13345.513: 94.1069% ( 4) 00:09:42.145 13345.513 - 
13405.091: 94.1386% ( 4) 00:09:42.145 13405.091 - 13464.669: 94.1624% ( 3) 00:09:42.145 13464.669 - 13524.247: 94.1942% ( 4) 00:09:42.145 13524.247 - 13583.825: 94.2180% ( 3) 00:09:42.145 13583.825 - 13643.404: 94.2418% ( 3) 00:09:42.145 13643.404 - 13702.982: 94.2735% ( 4) 00:09:42.145 13702.982 - 13762.560: 94.3052% ( 4) 00:09:42.145 13762.560 - 13822.138: 94.3369% ( 4) 00:09:42.145 13822.138 - 13881.716: 94.3687% ( 4) 00:09:42.145 13881.716 - 13941.295: 94.4242% ( 7) 00:09:42.145 13941.295 - 14000.873: 94.4718% ( 6) 00:09:42.145 14000.873 - 14060.451: 94.5273% ( 7) 00:09:42.145 14060.451 - 14120.029: 94.5749% ( 6) 00:09:42.145 14120.029 - 14179.607: 94.6304% ( 7) 00:09:42.145 14179.607 - 14239.185: 94.6780% ( 6) 00:09:42.145 14239.185 - 14298.764: 94.7335% ( 7) 00:09:42.145 14298.764 - 14358.342: 94.7890% ( 7) 00:09:42.145 14358.342 - 14417.920: 94.8366% ( 6) 00:09:42.145 14417.920 - 14477.498: 94.8842% ( 6) 00:09:42.145 14477.498 - 14537.076: 94.9397% ( 7) 00:09:42.145 14537.076 - 14596.655: 94.9873% ( 6) 00:09:42.145 14596.655 - 14656.233: 95.0349% ( 6) 00:09:42.145 14656.233 - 14715.811: 95.0666% ( 4) 00:09:42.145 14715.811 - 14775.389: 95.1063% ( 5) 00:09:42.145 14775.389 - 14834.967: 95.1380% ( 4) 00:09:42.145 14834.967 - 14894.545: 95.1697% ( 4) 00:09:42.145 14894.545 - 14954.124: 95.2094% ( 5) 00:09:42.145 14954.124 - 15013.702: 95.2411% ( 4) 00:09:42.145 15013.702 - 15073.280: 95.2808% ( 5) 00:09:42.145 15073.280 - 15132.858: 95.3284% ( 6) 00:09:42.145 15132.858 - 15192.436: 95.3760% ( 6) 00:09:42.145 15192.436 - 15252.015: 95.4394% ( 8) 00:09:42.145 15252.015 - 15371.171: 95.5742% ( 17) 00:09:42.145 15371.171 - 15490.327: 95.7091% ( 17) 00:09:42.145 15490.327 - 15609.484: 95.8360% ( 16) 00:09:42.145 15609.484 - 15728.640: 96.0025% ( 21) 00:09:42.145 15728.640 - 15847.796: 96.2088% ( 26) 00:09:42.145 15847.796 - 15966.953: 96.3991% ( 24) 00:09:42.145 15966.953 - 16086.109: 96.5815% ( 23) 00:09:42.145 16086.109 - 16205.265: 96.7878% ( 26) 00:09:42.145 16205.265 - 16324.422: 97.0098% ( 28) 00:09:42.145 16324.422 - 16443.578: 97.2081% ( 25) 00:09:42.145 16443.578 - 16562.735: 97.3668% ( 20) 00:09:42.145 16562.735 - 16681.891: 97.5412% ( 22) 00:09:42.145 16681.891 - 16801.047: 97.6999% ( 20) 00:09:42.145 16801.047 - 16920.204: 97.8426% ( 18) 00:09:42.145 16920.204 - 17039.360: 97.9616% ( 15) 00:09:42.145 17039.360 - 17158.516: 98.0964% ( 17) 00:09:42.145 17158.516 - 17277.673: 98.2154% ( 15) 00:09:42.145 17277.673 - 17396.829: 98.3503% ( 17) 00:09:42.145 17396.829 - 17515.985: 98.4692% ( 15) 00:09:42.145 17515.985 - 17635.142: 98.5644% ( 12) 00:09:42.145 17635.142 - 17754.298: 98.6675% ( 13) 00:09:42.145 17754.298 - 17873.455: 98.7627% ( 12) 00:09:42.145 17873.455 - 17992.611: 98.8341% ( 9) 00:09:42.145 17992.611 - 18111.767: 98.8975% ( 8) 00:09:42.145 18111.767 - 18230.924: 98.9689% ( 9) 00:09:42.145 18230.924 - 18350.080: 98.9848% ( 2) 00:09:42.145 33602.095 - 33840.407: 99.0244% ( 5) 00:09:42.145 33840.407 - 34078.720: 99.0641% ( 5) 00:09:42.145 34078.720 - 34317.033: 99.1196% ( 7) 00:09:42.145 34317.033 - 34555.345: 99.1513% ( 4) 00:09:42.145 34555.345 - 34793.658: 99.2069% ( 7) 00:09:42.145 34793.658 - 35031.971: 99.2703% ( 8) 00:09:42.145 35031.971 - 35270.284: 99.3179% ( 6) 00:09:42.145 35270.284 - 35508.596: 99.3734% ( 7) 00:09:42.145 35508.596 - 35746.909: 99.4369% ( 8) 00:09:42.145 35746.909 - 35985.222: 99.4924% ( 7) 00:09:42.145 40751.476 - 40989.789: 99.5400% ( 6) 00:09:42.145 40989.789 - 41228.102: 99.5876% ( 6) 00:09:42.145 41228.102 - 41466.415: 99.6431% ( 7) 
00:09:42.145 41466.415 - 41704.727: 99.6986% ( 7) 00:09:42.145 41704.727 - 41943.040: 99.7462% ( 6) 00:09:42.145 41943.040 - 42181.353: 99.8096% ( 8) 00:09:42.145 42181.353 - 42419.665: 99.8731% ( 8) 00:09:42.145 42419.665 - 42657.978: 99.9207% ( 6) 00:09:42.145 42657.978 - 42896.291: 99.9841% ( 8) 00:09:42.145 42896.291 - 43134.604: 100.0000% ( 2) 00:09:42.145 00:09:42.145 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:42.145 ============================================================================== 00:09:42.145 Range in us Cumulative IO count 00:09:42.145 7804.742 - 7864.320: 0.0079% ( 1) 00:09:42.145 7864.320 - 7923.898: 0.1578% ( 19) 00:09:42.145 7923.898 - 7983.476: 0.5051% ( 44) 00:09:42.145 7983.476 - 8043.055: 1.3021% ( 101) 00:09:42.145 8043.055 - 8102.633: 2.4384% ( 144) 00:09:42.145 8102.633 - 8162.211: 3.7800% ( 170) 00:09:42.145 8162.211 - 8221.789: 5.1926% ( 179) 00:09:42.145 8221.789 - 8281.367: 6.7866% ( 202) 00:09:42.145 8281.367 - 8340.945: 8.4359% ( 209) 00:09:42.145 8340.945 - 8400.524: 10.1484% ( 217) 00:09:42.145 8400.524 - 8460.102: 11.9003% ( 222) 00:09:42.145 8460.102 - 8519.680: 13.6995% ( 228) 00:09:42.145 8519.680 - 8579.258: 15.4908% ( 227) 00:09:42.145 8579.258 - 8638.836: 17.4085% ( 243) 00:09:42.145 8638.836 - 8698.415: 19.2629% ( 235) 00:09:42.145 8698.415 - 8757.993: 21.2989% ( 258) 00:09:42.145 8757.993 - 8817.571: 23.4533% ( 273) 00:09:42.145 8817.571 - 8877.149: 25.8996% ( 310) 00:09:42.145 8877.149 - 8936.727: 28.6616% ( 350) 00:09:42.145 8936.727 - 8996.305: 31.8261% ( 401) 00:09:42.145 8996.305 - 9055.884: 35.1168% ( 417) 00:09:42.145 9055.884 - 9115.462: 38.2023% ( 391) 00:09:42.145 9115.462 - 9175.040: 41.0038% ( 355) 00:09:42.145 9175.040 - 9234.618: 43.4975% ( 316) 00:09:42.145 9234.618 - 9294.196: 45.7939% ( 291) 00:09:42.145 9294.196 - 9353.775: 48.1218% ( 295) 00:09:42.145 9353.775 - 9413.353: 50.4577% ( 296) 00:09:42.145 9413.353 - 9472.931: 52.7383% ( 289) 00:09:42.145 9472.931 - 9532.509: 54.9716% ( 283) 00:09:42.145 9532.509 - 9592.087: 57.1496% ( 276) 00:09:42.145 9592.087 - 9651.665: 59.2724% ( 269) 00:09:42.145 9651.665 - 9711.244: 61.4268% ( 273) 00:09:42.145 9711.244 - 9770.822: 63.6206% ( 278) 00:09:42.146 9770.822 - 9830.400: 65.7355% ( 268) 00:09:42.146 9830.400 - 9889.978: 67.7399% ( 254) 00:09:42.146 9889.978 - 9949.556: 69.5628% ( 231) 00:09:42.146 9949.556 - 10009.135: 71.2595% ( 215) 00:09:42.146 10009.135 - 10068.713: 72.5694% ( 166) 00:09:42.146 10068.713 - 10128.291: 73.5480% ( 124) 00:09:42.146 10128.291 - 10187.869: 74.2898% ( 94) 00:09:42.146 10187.869 - 10247.447: 74.8816% ( 75) 00:09:42.146 10247.447 - 10307.025: 75.4340% ( 70) 00:09:42.146 10307.025 - 10366.604: 75.9470% ( 65) 00:09:42.146 10366.604 - 10426.182: 76.4205% ( 60) 00:09:42.146 10426.182 - 10485.760: 76.8545% ( 55) 00:09:42.146 10485.760 - 10545.338: 77.2964% ( 56) 00:09:42.146 10545.338 - 10604.916: 77.6673% ( 47) 00:09:42.146 10604.916 - 10664.495: 78.0934% ( 54) 00:09:42.146 10664.495 - 10724.073: 78.4959% ( 51) 00:09:42.146 10724.073 - 10783.651: 78.9378% ( 56) 00:09:42.146 10783.651 - 10843.229: 79.4429% ( 64) 00:09:42.146 10843.229 - 10902.807: 79.9321% ( 62) 00:09:42.146 10902.807 - 10962.385: 80.4372% ( 64) 00:09:42.146 10962.385 - 11021.964: 80.8949% ( 58) 00:09:42.146 11021.964 - 11081.542: 81.3999% ( 64) 00:09:42.146 11081.542 - 11141.120: 81.9129% ( 65) 00:09:42.146 11141.120 - 11200.698: 82.5047% ( 75) 00:09:42.146 11200.698 - 11260.276: 83.0966% ( 75) 00:09:42.146 11260.276 - 11319.855: 83.7042% ( 77) 00:09:42.146 
00:09:42.146 [latency histogram continues: 11319.855 - 11379.433 us at 84.2566% cumulative through 35031.971 - 35270.284 us at 100.0000%]
00:09:42.146 
00:09:42.146 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:42.146 ==============================================================================
00:09:42.146        Range in us     Cumulative    IO count
00:09:42.146 [buckets: 7804.742 - 7864.320 us at 0.0158% cumulative through 31933.905 - 32172.218 us at 100.0000%]
00:09:42.148 
00:09:42.148 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:42.148 ==============================================================================
00:09:42.148        Range in us     Cumulative    IO count
00:09:42.149 [buckets: 7745.164 - 7804.742 us at 0.0316% cumulative through 28120.902 - 28240.058 us at 100.0000%]
00:09:42.149 
00:09:42.149  18:54:13 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:43.526 Initializing NVMe Controllers
00:09:43.526 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:43.526 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:43.526 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:43.526 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:43.526 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:43.526 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:43.526 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:43.526 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:43.526 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:43.526 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:43.526 Initialization complete. Launching workers.
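For context on the command line recorded above: -q 128 is the I/O queue depth, -w write selects a sequential-write workload, -o 12288 uses 12 KiB I/Os, -t 1 runs for one second, -L enables software latency tracking (given twice, as -LL here, it also prints the detailed per-bucket histograms seen in this log), and -i 0 sets the shared-memory group ID. A minimal sketch of driving the same binary from a script — the binary path and flags are taken from this log, the wrapper itself is purely illustrative:

    #!/usr/bin/env python3
    # Illustrative wrapper; assumes the SPDK build path seen in this log.
    import subprocess

    PERF = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf"

    def run_perf(qd=128, workload="write", io_size=12288, seconds=1):
        # Mirrors the invocation above: -LL adds per-bucket latency
        # histograms, -i 0 joins shared-memory group 0.
        cmd = [PERF, "-q", str(qd), "-w", workload, "-o", str(io_size),
               "-t", str(seconds), "-LL", "-i", "0"]
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    if __name__ == "__main__":
        print(run_perf())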
00:09:43.526 ========================================================
00:09:43.526                                                                  Latency(us)
00:09:43.526 Device Information                     :       IOPS      MiB/s    Average        min        max
00:09:43.526 PCIE (0000:00:10.0) NSID 1 from core 0:    10313.29     120.86   12440.97    8943.94   37974.80
00:09:43.526 PCIE (0000:00:11.0) NSID 1 from core 0:    10313.29     120.86   12420.09    9069.99   36082.25
00:09:43.526 PCIE (0000:00:13.0) NSID 1 from core 0:    10313.29     120.86   12398.62    9053.58   34844.42
00:09:43.526 PCIE (0000:00:12.0) NSID 1 from core 0:    10313.29     120.86   12377.50    9081.68   32963.29
00:09:43.526 PCIE (0000:00:12.0) NSID 2 from core 0:    10313.29     120.86   12356.46    9166.23   31220.67
00:09:43.526 PCIE (0000:00:12.0) NSID 3 from core 0:    10313.29     120.86   12335.46    9097.04   29430.83
00:09:43.526 ========================================================
00:09:43.526 Total                                  :    61879.73     725.15   12388.18    8943.94   37974.80
00:09:43.526 
00:09:43.526 Summary latency data (percentiles, us) from core 0:
00:09:43.526 =================================================================================
00:09:43.526 Device                            1%        10%        25%        50%        75%        90%        95%        98%        99%      99.5%      99.9%     99.99%  >=99.999%
00:09:43.526 PCIE (0000:00:10.0) NSID 1  9353.775  10187.869  10843.229  12034.793  13464.669  14775.389  15490.327  16205.265  27882.589  35985.222  37653.411  38130.036  38130.036
00:09:43.526 PCIE (0000:00:11.0) NSID 1  9472.931  10247.447  10902.807  12094.371  13464.669  14775.389  15490.327  16086.109  26929.338  34317.033  35746.909  36223.535  36223.535
00:09:43.526 PCIE (0000:00:13.0) NSID 1  9472.931  10247.447  10843.229  12094.371  13405.091  14715.811  15490.327  16086.109  25618.618  33125.469  34555.345  35031.971  35031.971
00:09:43.526 PCIE (0000:00:12.0) NSID 1  9472.931  10247.447  10902.807  12094.371  13405.091  14596.655  15371.171  15966.953  23712.116  31218.967  32648.844  33125.469  33125.469
00:09:43.526 PCIE (0000:00:12.0) NSID 2  9472.931  10247.447  10902.807  12094.371  13464.669  14656.233  15371.171  15966.953  21686.458  29431.622  30980.655  31218.967  31457.280
00:09:43.526 PCIE (0000:00:12.0) NSID 3  9472.931  10247.447  10902.807  12094.371  13405.091  14715.811  15371.171  15966.953  19899.113  27644.276  29074.153  29431.622  29431.622
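The summary block above has a fixed row shape, so it can be scraped from a captured log. A small parsing sketch; the regular expression is written against the exact format shown here (an assumption about this log, not a stable SPDK output contract):

    import re

    # Matches rows like:
    # "PCIE (0000:00:10.0) NSID 1 from core 0:  10313.29  120.86  12440.97  8943.94  37974.80"
    ROW = re.compile(
        r"PCIE \((?P<bdf>[0-9a-fA-F:.]+)\) NSID (?P<nsid>\d+) from core\s+(?P<core>\d+):"
        r"\s+(?P<iops>[\d.]+)\s+(?P<mibs>[\d.]+)\s+(?P<avg>[\d.]+)"
        r"\s+(?P<min>[\d.]+)\s+(?P<max>[\d.]+)")

    def parse_summary(text):
        """Yield one dict per device row; latency columns are microseconds."""
        for m in ROW.finditer(text):
            yield {"bdf": m["bdf"], "nsid": int(m["nsid"]), "core": int(m["core"]),
                   "iops": float(m["iops"]), "mib_s": float(m["mibs"]),
                   "avg_us": float(m["avg"]), "min_us": float(m["min"]),
                   "max_us": float(m["max"])}

As a sanity check, and assuming -q 128 applies per namespace (six namespaces, roughly 768 I/Os in flight), Little's law matches the table: 61879.73 IOPS x 12388.18 us average latency is about 767 outstanding I/Os.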
00:09:43.526 
00:09:43.526 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:43.526 ==============================================================================
00:09:43.526        Range in us     Cumulative    IO count
00:09:43.527 [buckets: 8936.727 - 8996.305 us at 0.0386% cumulative through 37891.724 - 38130.036 us at 100.0000%]
00:09:43.527 
00:09:43.527 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:43.527 ==============================================================================
00:09:43.527        Range in us     Cumulative    IO count
00:09:43.528 [buckets: 9055.884 - 9115.462 us at 0.0675% cumulative through 35985.222 - 36223.535 us at 100.0000%]
00:09:43.528 
00:09:43.528 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:43.528 ==============================================================================
00:09:43.528        Range in us     Cumulative    IO count
00:09:43.529 [buckets: 8996.305 - 9055.884 us at 0.0193% cumulative through 34793.658 - 35031.971 us at 100.0000%]
00:09:43.529 
00:09:43.529 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:43.529 ==============================================================================
00:09:43.529        Range in us     Cumulative    IO count
00:09:43.529 [buckets: 9055.884 - 9115.462 us at 0.0675% cumulative through 32887.156 - 33125.469 us at 100.0000%]
00:09:43.530 
00:09:43.530 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:43.530 ==============================================================================
00:09:43.530        Range in us     Cumulative    IO count
00:09:43.530 [buckets: 9115.462 - 9175.040 us at 0.0096% cumulative through 16324.422 - 16443.578 us at 98.5918%]
00:09:43.530 16443.578 - 16562.735:
98.6304% ( 4) 00:09:43.530 16562.735 - 16681.891: 98.6883% ( 6) 00:09:43.530 16681.891 - 16801.047: 98.7461% ( 6) 00:09:43.530 16801.047 - 16920.204: 98.7654% ( 2) 00:09:43.530 21328.989 - 21448.145: 98.9101% ( 15) 00:09:43.530 21448.145 - 21567.302: 98.9776% ( 7) 00:09:43.530 21567.302 - 21686.458: 99.0066% ( 3) 00:09:43.530 21686.458 - 21805.615: 99.0258% ( 2) 00:09:43.530 21805.615 - 21924.771: 99.0451% ( 2) 00:09:43.530 21924.771 - 22043.927: 99.0644% ( 2) 00:09:43.530 22043.927 - 22163.084: 99.0934% ( 3) 00:09:43.530 22163.084 - 22282.240: 99.1223% ( 3) 00:09:43.530 22282.240 - 22401.396: 99.1512% ( 3) 00:09:43.530 22401.396 - 22520.553: 99.1802% ( 3) 00:09:43.530 22520.553 - 22639.709: 99.2091% ( 3) 00:09:43.530 22639.709 - 22758.865: 99.2380% ( 3) 00:09:43.530 22758.865 - 22878.022: 99.2766% ( 4) 00:09:43.530 22878.022 - 22997.178: 99.3056% ( 3) 00:09:43.530 22997.178 - 23116.335: 99.3441% ( 4) 00:09:43.530 23116.335 - 23235.491: 99.3827% ( 4) 00:09:43.530 28835.840 - 28954.996: 99.4020% ( 2) 00:09:43.530 28954.996 - 29074.153: 99.4309% ( 3) 00:09:43.530 29074.153 - 29193.309: 99.4695% ( 4) 00:09:43.530 29193.309 - 29312.465: 99.4985% ( 3) 00:09:43.530 29312.465 - 29431.622: 99.5274% ( 3) 00:09:43.531 29431.622 - 29550.778: 99.5563% ( 3) 00:09:43.531 29550.778 - 29669.935: 99.5853% ( 3) 00:09:43.531 29669.935 - 29789.091: 99.6142% ( 3) 00:09:43.531 29789.091 - 29908.247: 99.6528% ( 4) 00:09:43.531 29908.247 - 30027.404: 99.6817% ( 3) 00:09:43.531 30027.404 - 30146.560: 99.7106% ( 3) 00:09:43.531 30146.560 - 30265.716: 99.7492% ( 4) 00:09:43.531 30265.716 - 30384.873: 99.7782% ( 3) 00:09:43.531 30384.873 - 30504.029: 99.8071% ( 3) 00:09:43.531 30504.029 - 30742.342: 99.8746% ( 7) 00:09:43.531 30742.342 - 30980.655: 99.9325% ( 6) 00:09:43.531 30980.655 - 31218.967: 99.9904% ( 6) 00:09:43.531 31218.967 - 31457.280: 100.0000% ( 1) 00:09:43.531 00:09:43.531 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:43.531 ============================================================================== 00:09:43.531 Range in us Cumulative IO count 00:09:43.531 9055.884 - 9115.462: 0.0289% ( 3) 00:09:43.531 9115.462 - 9175.040: 0.0675% ( 4) 00:09:43.531 9175.040 - 9234.618: 0.1254% ( 6) 00:09:43.531 9234.618 - 9294.196: 0.2411% ( 12) 00:09:43.531 9294.196 - 9353.775: 0.5305% ( 30) 00:09:43.531 9353.775 - 9413.353: 0.8584% ( 34) 00:09:43.531 9413.353 - 9472.931: 1.2346% ( 39) 00:09:43.531 9472.931 - 9532.509: 1.6107% ( 39) 00:09:43.531 9532.509 - 9592.087: 2.1316% ( 54) 00:09:43.531 9592.087 - 9651.665: 2.6910% ( 58) 00:09:43.531 9651.665 - 9711.244: 3.3372% ( 67) 00:09:43.531 9711.244 - 9770.822: 4.1184% ( 81) 00:09:43.531 9770.822 - 9830.400: 4.7936% ( 70) 00:09:43.531 9830.400 - 9889.978: 5.5941% ( 83) 00:09:43.531 9889.978 - 9949.556: 6.3561% ( 79) 00:09:43.531 9949.556 - 10009.135: 7.1663% ( 84) 00:09:43.531 10009.135 - 10068.713: 7.9958% ( 86) 00:09:43.531 10068.713 - 10128.291: 8.8252% ( 86) 00:09:43.531 10128.291 - 10187.869: 9.7704% ( 98) 00:09:43.531 10187.869 - 10247.447: 10.7542% ( 102) 00:09:43.531 10247.447 - 10307.025: 11.7188% ( 100) 00:09:43.531 10307.025 - 10366.604: 12.9244% ( 125) 00:09:43.531 10366.604 - 10426.182: 14.2072% ( 133) 00:09:43.531 10426.182 - 10485.760: 15.7697% ( 162) 00:09:43.531 10485.760 - 10545.338: 17.3322% ( 162) 00:09:43.531 10545.338 - 10604.916: 18.8465% ( 157) 00:09:43.531 10604.916 - 10664.495: 20.4379% ( 165) 00:09:43.531 10664.495 - 10724.073: 21.9136% ( 153) 00:09:43.531 10724.073 - 10783.651: 23.2928% ( 143) 00:09:43.531 10783.651 - 
10843.229: 24.6431% ( 140) 00:09:43.531 10843.229 - 10902.807: 25.8970% ( 130) 00:09:43.531 10902.807 - 10962.385: 27.2087% ( 136) 00:09:43.531 10962.385 - 11021.964: 28.7037% ( 155) 00:09:43.531 11021.964 - 11081.542: 30.3241% ( 168) 00:09:43.531 11081.542 - 11141.120: 31.8866% ( 162) 00:09:43.531 11141.120 - 11200.698: 33.2658% ( 143) 00:09:43.531 11200.698 - 11260.276: 34.6547% ( 144) 00:09:43.531 11260.276 - 11319.855: 36.0629% ( 146) 00:09:43.531 11319.855 - 11379.433: 37.1721% ( 115) 00:09:43.531 11379.433 - 11439.011: 38.2234% ( 109) 00:09:43.531 11439.011 - 11498.589: 39.1782% ( 99) 00:09:43.531 11498.589 - 11558.167: 40.0752% ( 93) 00:09:43.531 11558.167 - 11617.745: 41.0397% ( 100) 00:09:43.531 11617.745 - 11677.324: 42.2454% ( 125) 00:09:43.531 11677.324 - 11736.902: 43.3738% ( 117) 00:09:43.531 11736.902 - 11796.480: 44.4348% ( 110) 00:09:43.531 11796.480 - 11856.058: 45.5922% ( 120) 00:09:43.531 11856.058 - 11915.636: 46.8171% ( 127) 00:09:43.531 11915.636 - 11975.215: 48.1385% ( 137) 00:09:43.531 11975.215 - 12034.793: 49.3634% ( 127) 00:09:43.531 12034.793 - 12094.371: 50.6269% ( 131) 00:09:43.531 12094.371 - 12153.949: 51.8711% ( 129) 00:09:43.531 12153.949 - 12213.527: 52.9707% ( 114) 00:09:43.531 12213.527 - 12273.105: 53.9834% ( 105) 00:09:43.531 12273.105 - 12332.684: 55.0154% ( 107) 00:09:43.531 12332.684 - 12392.262: 56.2211% ( 125) 00:09:43.531 12392.262 - 12451.840: 57.5424% ( 137) 00:09:43.531 12451.840 - 12511.418: 58.8638% ( 137) 00:09:43.531 12511.418 - 12570.996: 60.1755% ( 136) 00:09:43.531 12570.996 - 12630.575: 61.4873% ( 136) 00:09:43.531 12630.575 - 12690.153: 62.7508% ( 131) 00:09:43.531 12690.153 - 12749.731: 63.8985% ( 119) 00:09:43.531 12749.731 - 12809.309: 65.0559% ( 120) 00:09:43.531 12809.309 - 12868.887: 66.1748% ( 116) 00:09:43.531 12868.887 - 12928.465: 67.2647% ( 113) 00:09:43.531 12928.465 - 12988.044: 68.3160% ( 109) 00:09:43.531 12988.044 - 13047.622: 69.3576% ( 108) 00:09:43.531 13047.622 - 13107.200: 70.2450% ( 92) 00:09:43.531 13107.200 - 13166.778: 71.1613% ( 95) 00:09:43.531 13166.778 - 13226.356: 72.1740% ( 105) 00:09:43.531 13226.356 - 13285.935: 73.2446% ( 111) 00:09:43.531 13285.935 - 13345.513: 74.1898% ( 98) 00:09:43.531 13345.513 - 13405.091: 75.0675% ( 91) 00:09:43.531 13405.091 - 13464.669: 75.8584% ( 82) 00:09:43.531 13464.669 - 13524.247: 76.6686% ( 84) 00:09:43.531 13524.247 - 13583.825: 77.5656% ( 93) 00:09:43.531 13583.825 - 13643.404: 78.5108% ( 98) 00:09:43.531 13643.404 - 13702.982: 79.4560% ( 98) 00:09:43.531 13702.982 - 13762.560: 80.2855% ( 86) 00:09:43.531 13762.560 - 13822.138: 81.1053% ( 85) 00:09:43.531 13822.138 - 13881.716: 81.9734% ( 90) 00:09:43.531 13881.716 - 13941.295: 82.7739% ( 83) 00:09:43.531 13941.295 - 14000.873: 83.5455% ( 80) 00:09:43.531 14000.873 - 14060.451: 84.2110% ( 69) 00:09:43.531 14060.451 - 14120.029: 84.8958% ( 71) 00:09:43.531 14120.029 - 14179.607: 85.4552% ( 58) 00:09:43.531 14179.607 - 14239.185: 85.9664% ( 53) 00:09:43.531 14239.185 - 14298.764: 86.6127% ( 67) 00:09:43.531 14298.764 - 14358.342: 87.1914% ( 60) 00:09:43.531 14358.342 - 14417.920: 87.7315% ( 56) 00:09:43.531 14417.920 - 14477.498: 88.2427% ( 53) 00:09:43.531 14477.498 - 14537.076: 88.7924% ( 57) 00:09:43.531 14537.076 - 14596.655: 89.3904% ( 62) 00:09:43.531 14596.655 - 14656.233: 89.9016% ( 53) 00:09:43.531 14656.233 - 14715.811: 90.3549% ( 47) 00:09:43.531 14715.811 - 14775.389: 90.8372% ( 50) 00:09:43.531 14775.389 - 14834.967: 91.2616% ( 44) 00:09:43.531 14834.967 - 14894.545: 91.5895% ( 34) 00:09:43.531 14894.545 
- 14954.124: 92.0332% ( 46) 00:09:43.531 14954.124 - 15013.702: 92.4383% ( 42) 00:09:43.531 15013.702 - 15073.280: 92.9784% ( 56) 00:09:43.531 15073.280 - 15132.858: 93.5282% ( 57) 00:09:43.531 15132.858 - 15192.436: 93.9333% ( 42) 00:09:43.531 15192.436 - 15252.015: 94.4348% ( 52) 00:09:43.531 15252.015 - 15371.171: 95.2932% ( 89) 00:09:43.531 15371.171 - 15490.327: 96.0262% ( 76) 00:09:43.531 15490.327 - 15609.484: 96.6435% ( 64) 00:09:43.531 15609.484 - 15728.640: 97.2222% ( 60) 00:09:43.531 15728.640 - 15847.796: 97.6370% ( 43) 00:09:43.531 15847.796 - 15966.953: 98.0710% ( 45) 00:09:43.531 15966.953 - 16086.109: 98.3121% ( 25) 00:09:43.531 16086.109 - 16205.265: 98.4761% ( 17) 00:09:43.531 16205.265 - 16324.422: 98.5532% ( 8) 00:09:43.531 16324.422 - 16443.578: 98.6111% ( 6) 00:09:43.531 16443.578 - 16562.735: 98.6690% ( 6) 00:09:43.531 16562.735 - 16681.891: 98.7365% ( 7) 00:09:43.531 16681.891 - 16801.047: 98.7654% ( 3) 00:09:43.531 19422.487 - 19541.644: 98.7751% ( 1) 00:09:43.531 19541.644 - 19660.800: 98.8619% ( 9) 00:09:43.531 19660.800 - 19779.956: 98.9776% ( 12) 00:09:43.531 19779.956 - 19899.113: 99.0066% ( 3) 00:09:43.531 19899.113 - 20018.269: 99.0355% ( 3) 00:09:43.531 20018.269 - 20137.425: 99.0644% ( 3) 00:09:43.531 20137.425 - 20256.582: 99.0837% ( 2) 00:09:43.531 20256.582 - 20375.738: 99.1030% ( 2) 00:09:43.531 20375.738 - 20494.895: 99.1319% ( 3) 00:09:43.531 20494.895 - 20614.051: 99.1609% ( 3) 00:09:43.531 20614.051 - 20733.207: 99.1898% ( 3) 00:09:43.531 20733.207 - 20852.364: 99.2188% ( 3) 00:09:43.531 20852.364 - 20971.520: 99.2573% ( 4) 00:09:43.531 20971.520 - 21090.676: 99.2863% ( 3) 00:09:43.531 21090.676 - 21209.833: 99.3152% ( 3) 00:09:43.531 21209.833 - 21328.989: 99.3538% ( 4) 00:09:43.531 21328.989 - 21448.145: 99.3827% ( 3) 00:09:43.531 27048.495 - 27167.651: 99.4020% ( 2) 00:09:43.531 27167.651 - 27286.807: 99.4213% ( 2) 00:09:43.531 27286.807 - 27405.964: 99.4599% ( 4) 00:09:43.531 27405.964 - 27525.120: 99.4888% ( 3) 00:09:43.531 27525.120 - 27644.276: 99.5177% ( 3) 00:09:43.531 27644.276 - 27763.433: 99.5563% ( 4) 00:09:43.531 27763.433 - 27882.589: 99.5853% ( 3) 00:09:43.531 27882.589 - 28001.745: 99.6142% ( 3) 00:09:43.531 28001.745 - 28120.902: 99.6528% ( 4) 00:09:43.532 28120.902 - 28240.058: 99.6817% ( 3) 00:09:43.532 28240.058 - 28359.215: 99.7106% ( 3) 00:09:43.532 28359.215 - 28478.371: 99.7492% ( 4) 00:09:43.532 28478.371 - 28597.527: 99.7782% ( 3) 00:09:43.532 28597.527 - 28716.684: 99.8167% ( 4) 00:09:43.532 28716.684 - 28835.840: 99.8457% ( 3) 00:09:43.532 28835.840 - 28954.996: 99.8746% ( 3) 00:09:43.532 28954.996 - 29074.153: 99.9035% ( 3) 00:09:43.532 29074.153 - 29193.309: 99.9421% ( 4) 00:09:43.532 29193.309 - 29312.465: 99.9711% ( 3) 00:09:43.532 29312.465 - 29431.622: 100.0000% ( 3) 00:09:43.532 00:09:43.532 18:54:14 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:43.532 00:09:43.532 real 0m2.764s 00:09:43.532 user 0m2.340s 00:09:43.532 sys 0m0.310s 00:09:43.532 18:54:14 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.532 ************************************ 00:09:43.532 END TEST nvme_perf 00:09:43.532 ************************************ 00:09:43.532 18:54:14 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:43.532 18:54:14 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:43.532 18:54:14 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:43.532 18:54:14 nvme -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.532 18:54:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.532 ************************************ 00:09:43.532 START TEST nvme_hello_world 00:09:43.532 ************************************ 00:09:43.532 18:54:14 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:44.099 Initializing NVMe Controllers 00:09:44.099 Attached to 0000:00:10.0 00:09:44.099 Namespace ID: 1 size: 6GB 00:09:44.099 Attached to 0000:00:11.0 00:09:44.099 Namespace ID: 1 size: 5GB 00:09:44.099 Attached to 0000:00:13.0 00:09:44.099 Namespace ID: 1 size: 1GB 00:09:44.099 Attached to 0000:00:12.0 00:09:44.099 Namespace ID: 1 size: 4GB 00:09:44.099 Namespace ID: 2 size: 4GB 00:09:44.099 Namespace ID: 3 size: 4GB 00:09:44.099 Initialization complete. 00:09:44.099 INFO: using host memory buffer for IO 00:09:44.099 Hello world! 00:09:44.099 INFO: using host memory buffer for IO 00:09:44.099 Hello world! 00:09:44.099 INFO: using host memory buffer for IO 00:09:44.099 Hello world! 00:09:44.099 INFO: using host memory buffer for IO 00:09:44.099 Hello world! 00:09:44.099 INFO: using host memory buffer for IO 00:09:44.099 Hello world! 00:09:44.099 INFO: using host memory buffer for IO 00:09:44.099 Hello world! 00:09:44.099 00:09:44.099 real 0m0.348s 00:09:44.099 user 0m0.141s 00:09:44.099 sys 0m0.154s 00:09:44.099 ************************************ 00:09:44.099 END TEST nvme_hello_world 00:09:44.099 ************************************ 00:09:44.099 18:54:15 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.099 18:54:15 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:44.099 18:54:15 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:44.099 18:54:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.099 18:54:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.099 18:54:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.099 ************************************ 00:09:44.099 START TEST nvme_sgl 00:09:44.099 ************************************ 00:09:44.099 18:54:15 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:44.356 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:44.356 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:44.356 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:44.356 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:44.356 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:44.356 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:44.356 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:44.356 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:44.356 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:44.356 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:44.356 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:44.356 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:44.356 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:44.356 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:44.356 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:44.356 0000:00:13.0: build_io_request_3 Invalid IO length parameter 
00:09:44.356 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:44.356 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:44.356 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:44.356 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:44.356 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:44.356 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:44.356 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:44.356 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:44.356 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:44.356 NVMe Readv/Writev Request test 00:09:44.356 Attached to 0000:00:10.0 00:09:44.356 Attached to 0000:00:11.0 00:09:44.356 Attached to 0000:00:13.0 00:09:44.356 Attached to 0000:00:12.0 00:09:44.356 0000:00:10.0: build_io_request_2 test passed 00:09:44.356 0000:00:10.0: build_io_request_4 test passed 00:09:44.356 0000:00:10.0: build_io_request_5 test passed 00:09:44.356 0000:00:10.0: build_io_request_6 test passed 00:09:44.356 0000:00:10.0: build_io_request_7 test passed 00:09:44.356 0000:00:10.0: build_io_request_10 test passed 00:09:44.356 0000:00:11.0: build_io_request_2 test passed 00:09:44.356 0000:00:11.0: build_io_request_4 test passed 00:09:44.357 0000:00:11.0: build_io_request_5 test passed 00:09:44.357 0000:00:11.0: build_io_request_6 test passed 00:09:44.357 0000:00:11.0: build_io_request_7 test passed 00:09:44.357 0000:00:11.0: build_io_request_10 test passed 00:09:44.357 Cleaning up... 
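The nvme_sgl run above exercises vectored (scatter-gather) I/O: each build_io_request_N case submits an SGL request, and the "Invalid IO length parameter" lines are the rejections the test expects for the deliberately malformed cases. A minimal sketch of how such a request is issued through the SPDK NVMe driver follows; spdk_nvme_ns_cmd_writev and the reset_sgl/next_sge callback signatures are the real public API, while the two-segment sgl_ctx layout and helper names are assumptions for illustration, not code from the test binary.

    /* Sketch of an SGL write via SPDK; assumes `ns` and `qpair` came from
     * probe/attach and that the payload lives in DMA-able memory
     * (e.g. allocated with spdk_dma_malloc). */
    #include "spdk/nvme.h"

    struct sgl_ctx {
        struct { void *base; uint32_t len; } seg[2]; /* hypothetical 2-segment payload */
        uint32_t idx;     /* segment the next SGE starts in */
        uint32_t offset;  /* byte offset into that segment */
    };

    static void reset_sgl(void *arg, uint32_t offset) {
        struct sgl_ctx *ctx = arg;
        ctx->idx = 0;
        while (offset >= ctx->seg[ctx->idx].len) { /* skip whole segments */
            offset -= ctx->seg[ctx->idx++].len;
        }
        ctx->offset = offset;
    }

    static int next_sge(void *arg, void **address, uint32_t *length) {
        struct sgl_ctx *ctx = arg;
        *address = (uint8_t *)ctx->seg[ctx->idx].base + ctx->offset;
        *length  = ctx->seg[ctx->idx].len - ctx->offset;
        ctx->offset = 0;
        ctx->idx++;
        return 0;
    }

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl) {
        /* spdk_nvme_cpl_is_error(cpl) separates the "test passed" completions
         * from the invalid-length requests rejected above */
    }

    static int submit_sgl_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                                struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count) {
        return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count,
                                       io_done, ctx, 0 /* io_flags */,
                                       reset_sgl, next_sge);
    }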
00:09:44.357 ************************************ 00:09:44.357 END TEST nvme_sgl 00:09:44.357 ************************************ 00:09:44.357 00:09:44.357 real 0m0.399s 00:09:44.357 user 0m0.205s 00:09:44.357 sys 0m0.142s 00:09:44.357 18:54:15 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.357 18:54:15 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:44.357 18:54:15 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:44.357 18:54:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.357 18:54:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.357 18:54:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.357 ************************************ 00:09:44.357 START TEST nvme_e2edp 00:09:44.357 ************************************ 00:09:44.357 18:54:15 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:44.923 NVMe Write/Read with End-to-End data protection test 00:09:44.924 Attached to 0000:00:10.0 00:09:44.924 Attached to 0000:00:11.0 00:09:44.924 Attached to 0000:00:13.0 00:09:44.924 Attached to 0000:00:12.0 00:09:44.924 Cleaning up... 00:09:44.924 ************************************ 00:09:44.924 END TEST nvme_e2edp 00:09:44.924 ************************************ 00:09:44.924 00:09:44.924 real 0m0.341s 00:09:44.924 user 0m0.132s 00:09:44.924 sys 0m0.164s 00:09:44.924 18:54:15 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.924 18:54:15 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:44.924 18:54:15 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:44.924 18:54:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.924 18:54:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.924 18:54:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.924 ************************************ 00:09:44.924 START TEST nvme_reserve 00:09:44.924 ************************************ 00:09:44.924 18:54:15 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:45.182 ===================================================== 00:09:45.182 NVMe Controller at PCI bus 0, device 16, function 0 00:09:45.182 ===================================================== 00:09:45.182 Reservations: Not Supported 00:09:45.182 ===================================================== 00:09:45.182 NVMe Controller at PCI bus 0, device 17, function 0 00:09:45.182 ===================================================== 00:09:45.182 Reservations: Not Supported 00:09:45.182 ===================================================== 00:09:45.182 NVMe Controller at PCI bus 0, device 19, function 0 00:09:45.182 ===================================================== 00:09:45.182 Reservations: Not Supported 00:09:45.182 ===================================================== 00:09:45.182 NVMe Controller at PCI bus 0, device 18, function 0 00:09:45.182 ===================================================== 00:09:45.182 Reservations: Not Supported 00:09:45.182 Reservation test passed 00:09:45.182 00:09:45.182 real 0m0.285s 00:09:45.182 user 0m0.109s 00:09:45.182 sys 0m0.134s 00:09:45.182 ************************************ 00:09:45.182 END TEST nvme_reserve 00:09:45.182 ************************************ 00:09:45.182 18:54:16 
nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.182 18:54:16 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:45.182 18:54:16 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:45.182 18:54:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.182 18:54:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.182 18:54:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.182 ************************************ 00:09:45.182 START TEST nvme_err_injection 00:09:45.182 ************************************ 00:09:45.182 18:54:16 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:45.442 NVMe Error Injection test 00:09:45.442 Attached to 0000:00:10.0 00:09:45.442 Attached to 0000:00:11.0 00:09:45.442 Attached to 0000:00:13.0 00:09:45.442 Attached to 0000:00:12.0 00:09:45.442 0000:00:12.0: get features failed as expected 00:09:45.442 0000:00:10.0: get features failed as expected 00:09:45.442 0000:00:11.0: get features failed as expected 00:09:45.442 0000:00:13.0: get features failed as expected 00:09:45.442 0000:00:10.0: get features successfully as expected 00:09:45.442 0000:00:11.0: get features successfully as expected 00:09:45.442 0000:00:13.0: get features successfully as expected 00:09:45.442 0000:00:12.0: get features successfully as expected 00:09:45.442 0000:00:10.0: read failed as expected 00:09:45.442 0000:00:11.0: read failed as expected 00:09:45.442 0000:00:13.0: read failed as expected 00:09:45.442 0000:00:12.0: read failed as expected 00:09:45.442 0000:00:10.0: read successfully as expected 00:09:45.442 0000:00:11.0: read successfully as expected 00:09:45.442 0000:00:13.0: read successfully as expected 00:09:45.442 0000:00:12.0: read successfully as expected 00:09:45.442 Cleaning up... 00:09:45.442 ************************************ 00:09:45.442 END TEST nvme_err_injection 00:09:45.442 ************************************ 00:09:45.442 00:09:45.442 real 0m0.320s 00:09:45.442 user 0m0.135s 00:09:45.442 sys 0m0.132s 00:09:45.442 18:54:16 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.442 18:54:16 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:45.442 18:54:16 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:45.442 18:54:16 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:09:45.442 18:54:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.442 18:54:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.442 ************************************ 00:09:45.442 START TEST nvme_overhead 00:09:45.442 ************************************ 00:09:45.442 18:54:16 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:46.900 Initializing NVMe Controllers 00:09:46.900 Attached to 0000:00:10.0 00:09:46.900 Attached to 0000:00:11.0 00:09:46.900 Attached to 0000:00:13.0 00:09:46.900 Attached to 0000:00:12.0 00:09:46.900 Initialization complete. Launching workers. 
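The overhead tool launched here prints two cumulative latency histograms below (submit and complete, summarized in ns, bucketed in us). Each bucket line reads "lo - hi: cumulative% ( count )", so a percentile can be read off as the upper edge of the first bucket whose cumulative share reaches it. A small self-contained helper to make that concrete, using values transcribed from the submit histogram that follows (the struct and function names are invented for the example):

    #include <stdio.h>

    struct bucket { double hi_us; double cum_pct; };

    /* Return the upper bucket edge at which the cumulative share
     * first reaches percentile p (0-100). */
    static double percentile(const struct bucket *b, int n, double p) {
        for (int i = 0; i < n; i++) {
            if (b[i].cum_pct >= p) {
                return b[i].hi_us;
            }
        }
        return b[n - 1].hi_us;
    }

    int main(void) {
        /* excerpt of the submit histogram printed below */
        struct bucket submit[] = {
            { 14.895, 43.4278 },
            { 15.011, 50.9278 },
            { 15.127, 55.9794 },
            { 15.244, 59.3729 },
        };
        /* prints "p50 <= 15.011 us", consistent with the right-skewed
         * 16212.5 ns mean the tool reports */
        printf("p50 <= %.3f us\n", percentile(submit, 4, 50.0));
        return 0;
    }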
00:09:46.900 submit (in ns) avg, min, max = 16212.5, 12700.5, 114806.8 00:09:46.900 complete (in ns) avg, min, max = 10892.5, 9342.3, 199126.8 00:09:46.900 00:09:46.900 Submit histogram 00:09:46.900 ================ 00:09:46.900 Range in us Cumulative Count 00:09:46.900 12.684 - 12.742: 0.0172% ( 2) 00:09:46.900 12.858 - 12.916: 0.0515% ( 4) 00:09:46.900 12.975 - 13.033: 0.0601% ( 1) 00:09:46.900 14.196 - 14.255: 0.0687% ( 1) 00:09:46.900 14.255 - 14.313: 0.0773% ( 1) 00:09:46.900 14.313 - 14.371: 0.1460% ( 8) 00:09:46.900 14.371 - 14.429: 0.3952% ( 29) 00:09:46.900 14.429 - 14.487: 1.6495% ( 146) 00:09:46.900 14.487 - 14.545: 5.1031% ( 402) 00:09:46.900 14.545 - 14.604: 10.9536% ( 681) 00:09:46.900 14.604 - 14.662: 18.7715% ( 910) 00:09:46.900 14.662 - 14.720: 26.1512% ( 859) 00:09:46.900 14.720 - 14.778: 32.8522% ( 780) 00:09:46.900 14.778 - 14.836: 38.6684% ( 677) 00:09:46.900 14.836 - 14.895: 43.4278% ( 554) 00:09:46.900 14.895 - 15.011: 50.9278% ( 873) 00:09:46.900 15.011 - 15.127: 55.9794% ( 588) 00:09:46.900 15.127 - 15.244: 59.3729% ( 395) 00:09:46.900 15.244 - 15.360: 61.4261% ( 239) 00:09:46.900 15.360 - 15.476: 62.6289% ( 140) 00:09:46.900 15.476 - 15.593: 63.7027% ( 125) 00:09:46.900 15.593 - 15.709: 64.3127% ( 71) 00:09:46.900 15.709 - 15.825: 64.7251% ( 48) 00:09:46.900 15.825 - 15.942: 65.0515% ( 38) 00:09:46.900 15.942 - 16.058: 65.3608% ( 36) 00:09:46.900 16.058 - 16.175: 65.6701% ( 36) 00:09:46.900 16.175 - 16.291: 65.9966% ( 38) 00:09:46.900 16.291 - 16.407: 66.6237% ( 73) 00:09:46.900 16.407 - 16.524: 67.5773% ( 111) 00:09:46.900 16.524 - 16.640: 68.4708% ( 104) 00:09:46.900 16.640 - 16.756: 69.1838% ( 83) 00:09:46.900 16.756 - 16.873: 69.5275% ( 40) 00:09:46.900 16.873 - 16.989: 69.8110% ( 33) 00:09:46.900 16.989 - 17.105: 69.9570% ( 17) 00:09:46.900 17.105 - 17.222: 70.2062% ( 29) 00:09:46.900 17.222 - 17.338: 71.4605% ( 146) 00:09:46.900 17.338 - 17.455: 74.9485% ( 406) 00:09:46.900 17.455 - 17.571: 80.8505% ( 687) 00:09:46.900 17.571 - 17.687: 85.3007% ( 518) 00:09:46.900 17.687 - 17.804: 87.4055% ( 245) 00:09:46.900 17.804 - 17.920: 88.7629% ( 158) 00:09:46.900 17.920 - 18.036: 89.7079% ( 110) 00:09:46.900 18.036 - 18.153: 90.5326% ( 96) 00:09:46.900 18.153 - 18.269: 91.2027% ( 78) 00:09:46.900 18.269 - 18.385: 91.7784% ( 67) 00:09:46.900 18.385 - 18.502: 92.1649% ( 45) 00:09:46.900 18.502 - 18.618: 92.4742% ( 36) 00:09:46.900 18.618 - 18.735: 92.7491% ( 32) 00:09:46.900 18.735 - 18.851: 92.9897% ( 28) 00:09:46.900 18.851 - 18.967: 93.1271% ( 16) 00:09:46.900 18.967 - 19.084: 93.2990% ( 20) 00:09:46.900 19.084 - 19.200: 93.4278% ( 15) 00:09:46.900 19.200 - 19.316: 93.5395% ( 13) 00:09:46.900 19.316 - 19.433: 93.6254% ( 10) 00:09:46.900 19.433 - 19.549: 93.7199% ( 11) 00:09:46.900 19.549 - 19.665: 93.8488% ( 15) 00:09:46.900 19.665 - 19.782: 93.9519% ( 12) 00:09:46.900 19.782 - 19.898: 94.0378% ( 10) 00:09:46.900 19.898 - 20.015: 94.0722% ( 4) 00:09:46.900 20.015 - 20.131: 94.1667% ( 11) 00:09:46.900 20.131 - 20.247: 94.2526% ( 10) 00:09:46.900 20.247 - 20.364: 94.2955% ( 5) 00:09:46.900 20.364 - 20.480: 94.3557% ( 7) 00:09:46.900 20.480 - 20.596: 94.4072% ( 6) 00:09:46.900 20.596 - 20.713: 94.4931% ( 10) 00:09:46.900 20.713 - 20.829: 94.5962% ( 12) 00:09:46.900 20.829 - 20.945: 94.6821% ( 10) 00:09:46.900 20.945 - 21.062: 94.7852% ( 12) 00:09:46.900 21.062 - 21.178: 94.8883% ( 12) 00:09:46.900 21.178 - 21.295: 95.0000% ( 13) 00:09:46.900 21.295 - 21.411: 95.0773% ( 9) 00:09:46.900 21.411 - 21.527: 95.2148% ( 16) 00:09:46.900 21.527 - 21.644: 95.3007% ( 10) 
00:09:46.900 21.644 - 21.760: 95.4038% ( 12) 00:09:46.900 21.760 - 21.876: 95.4639% ( 7) 00:09:46.900 21.876 - 21.993: 95.5069% ( 5) 00:09:46.900 21.993 - 22.109: 95.6014% ( 11) 00:09:46.900 22.109 - 22.225: 95.7474% ( 17) 00:09:46.900 22.225 - 22.342: 95.8591% ( 13) 00:09:46.900 22.342 - 22.458: 95.9708% ( 13) 00:09:46.900 22.458 - 22.575: 96.0825% ( 13) 00:09:46.900 22.575 - 22.691: 96.1856% ( 12) 00:09:46.900 22.691 - 22.807: 96.2973% ( 13) 00:09:46.900 22.807 - 22.924: 96.3746% ( 9) 00:09:46.900 22.924 - 23.040: 96.4948% ( 14) 00:09:46.900 23.040 - 23.156: 96.5893% ( 11) 00:09:46.900 23.156 - 23.273: 96.6924% ( 12) 00:09:46.900 23.273 - 23.389: 96.8127% ( 14) 00:09:46.900 23.389 - 23.505: 96.9072% ( 11) 00:09:46.900 23.505 - 23.622: 96.9502% ( 5) 00:09:46.900 23.622 - 23.738: 97.0962% ( 17) 00:09:46.900 23.738 - 23.855: 97.1821% ( 10) 00:09:46.900 23.855 - 23.971: 97.3024% ( 14) 00:09:46.900 23.971 - 24.087: 97.4570% ( 18) 00:09:46.900 24.087 - 24.204: 97.5515% ( 11) 00:09:46.900 24.204 - 24.320: 97.6718% ( 14) 00:09:46.900 24.320 - 24.436: 97.7663% ( 11) 00:09:46.900 24.436 - 24.553: 97.8522% ( 10) 00:09:46.900 24.553 - 24.669: 97.9210% ( 8) 00:09:46.900 24.669 - 24.785: 98.0326% ( 13) 00:09:46.900 24.785 - 24.902: 98.0670% ( 4) 00:09:46.900 24.902 - 25.018: 98.1271% ( 7) 00:09:46.900 25.018 - 25.135: 98.2131% ( 10) 00:09:46.900 25.135 - 25.251: 98.2904% ( 9) 00:09:46.900 25.251 - 25.367: 98.3419% ( 6) 00:09:46.900 25.367 - 25.484: 98.4192% ( 9) 00:09:46.900 25.484 - 25.600: 98.4794% ( 7) 00:09:46.900 25.600 - 25.716: 98.5309% ( 6) 00:09:46.900 25.716 - 25.833: 98.5739% ( 5) 00:09:46.900 25.833 - 25.949: 98.6426% ( 8) 00:09:46.900 25.949 - 26.065: 98.7027% ( 7) 00:09:46.900 26.065 - 26.182: 98.7801% ( 9) 00:09:46.900 26.182 - 26.298: 98.8402% ( 7) 00:09:46.900 26.298 - 26.415: 98.8832% ( 5) 00:09:46.900 26.415 - 26.531: 98.9777% ( 11) 00:09:46.900 26.531 - 26.647: 98.9948% ( 2) 00:09:46.900 26.647 - 26.764: 99.0378% ( 5) 00:09:46.900 26.764 - 26.880: 99.0808% ( 5) 00:09:46.900 26.880 - 26.996: 99.1151% ( 4) 00:09:46.900 26.996 - 27.113: 99.1753% ( 7) 00:09:46.900 27.113 - 27.229: 99.1924% ( 2) 00:09:46.900 27.229 - 27.345: 99.2182% ( 3) 00:09:46.900 27.462 - 27.578: 99.2440% ( 3) 00:09:46.900 27.578 - 27.695: 99.3127% ( 8) 00:09:46.900 27.695 - 27.811: 99.3471% ( 4) 00:09:46.900 27.811 - 27.927: 99.3557% ( 1) 00:09:46.900 27.927 - 28.044: 99.3643% ( 1) 00:09:46.900 28.044 - 28.160: 99.3900% ( 3) 00:09:46.900 28.160 - 28.276: 99.4158% ( 3) 00:09:46.900 28.276 - 28.393: 99.4416% ( 3) 00:09:46.900 28.393 - 28.509: 99.4674% ( 3) 00:09:46.900 28.509 - 28.625: 99.4931% ( 3) 00:09:46.900 28.625 - 28.742: 99.5103% ( 2) 00:09:46.900 28.742 - 28.858: 99.5275% ( 2) 00:09:46.900 28.858 - 28.975: 99.5447% ( 2) 00:09:46.900 29.091 - 29.207: 99.5533% ( 1) 00:09:46.900 29.207 - 29.324: 99.5704% ( 2) 00:09:46.900 29.324 - 29.440: 99.5790% ( 1) 00:09:46.900 29.440 - 29.556: 99.5876% ( 1) 00:09:46.900 29.556 - 29.673: 99.6306% ( 5) 00:09:46.900 29.673 - 29.789: 99.6392% ( 1) 00:09:46.900 29.789 - 30.022: 99.6735% ( 4) 00:09:46.900 30.022 - 30.255: 99.6821% ( 1) 00:09:46.900 30.255 - 30.487: 99.6993% ( 2) 00:09:46.900 30.487 - 30.720: 99.7251% ( 3) 00:09:46.900 30.720 - 30.953: 99.7423% ( 2) 00:09:46.900 30.953 - 31.185: 99.7509% ( 1) 00:09:46.900 31.185 - 31.418: 99.7766% ( 3) 00:09:46.900 31.418 - 31.651: 99.7938% ( 2) 00:09:46.900 31.651 - 31.884: 99.8110% ( 2) 00:09:46.900 32.349 - 32.582: 99.8196% ( 1) 00:09:46.900 32.815 - 33.047: 99.8282% ( 1) 00:09:46.900 33.047 - 33.280: 99.8368% ( 1) 
00:09:46.900 33.745 - 33.978: 99.8454% ( 1) 00:09:46.900 33.978 - 34.211: 99.8540% ( 1) 00:09:46.900 34.211 - 34.444: 99.8625% ( 1) 00:09:46.900 34.676 - 34.909: 99.8711% ( 1) 00:09:46.900 35.607 - 35.840: 99.8797% ( 1) 00:09:46.900 35.840 - 36.073: 99.8883% ( 1) 00:09:46.900 36.538 - 36.771: 99.8969% ( 1) 00:09:46.900 39.564 - 39.796: 99.9055% ( 1) 00:09:46.900 40.029 - 40.262: 99.9141% ( 1) 00:09:46.900 40.960 - 41.193: 99.9227% ( 1) 00:09:46.900 41.193 - 41.425: 99.9313% ( 1) 00:09:46.900 45.847 - 46.080: 99.9399% ( 1) 00:09:46.900 52.829 - 53.062: 99.9485% ( 1) 00:09:46.900 54.458 - 54.691: 99.9570% ( 1) 00:09:46.900 58.647 - 58.880: 99.9656% ( 1) 00:09:46.900 92.160 - 92.625: 99.9742% ( 1) 00:09:46.900 102.865 - 103.331: 99.9828% ( 1) 00:09:46.900 105.193 - 105.658: 99.9914% ( 1) 00:09:46.900 114.502 - 114.967: 100.0000% ( 1) 00:09:46.900 00:09:46.900 Complete histogram 00:09:46.900 ================== 00:09:46.900 Range in us Cumulative Count 00:09:46.900 9.309 - 9.367: 0.1117% ( 13) 00:09:46.900 9.367 - 9.425: 2.1821% ( 241) 00:09:46.900 9.425 - 9.484: 10.1460% ( 927) 00:09:46.900 9.484 - 9.542: 22.8093% ( 1474) 00:09:46.900 9.542 - 9.600: 35.3007% ( 1454) 00:09:46.900 9.600 - 9.658: 44.8540% ( 1112) 00:09:46.900 9.658 - 9.716: 51.5464% ( 779) 00:09:46.900 9.716 - 9.775: 55.6615% ( 479) 00:09:46.900 9.775 - 9.833: 58.3677% ( 315) 00:09:46.900 9.833 - 9.891: 59.6907% ( 154) 00:09:46.900 9.891 - 9.949: 60.6357% ( 110) 00:09:46.900 9.949 - 10.007: 61.2113% ( 67) 00:09:46.900 10.007 - 10.065: 61.6753% ( 54) 00:09:46.900 10.065 - 10.124: 62.0361% ( 42) 00:09:46.900 10.124 - 10.182: 62.3196% ( 33) 00:09:46.900 10.182 - 10.240: 62.5000% ( 21) 00:09:46.900 10.240 - 10.298: 62.7663% ( 31) 00:09:46.900 10.298 - 10.356: 63.1615% ( 46) 00:09:46.900 10.356 - 10.415: 63.5997% ( 51) 00:09:46.900 10.415 - 10.473: 64.2526% ( 76) 00:09:46.900 10.473 - 10.531: 64.8540% ( 70) 00:09:46.900 10.531 - 10.589: 65.3866% ( 62) 00:09:46.900 10.589 - 10.647: 65.7732% ( 45) 00:09:46.900 10.647 - 10.705: 66.0739% ( 35) 00:09:46.900 10.705 - 10.764: 66.2629% ( 22) 00:09:46.900 10.764 - 10.822: 66.4691% ( 24) 00:09:46.900 10.822 - 10.880: 66.6151% ( 17) 00:09:46.900 10.880 - 10.938: 66.6753% ( 7) 00:09:46.900 10.938 - 10.996: 66.7268% ( 6) 00:09:46.900 10.996 - 11.055: 66.7784% ( 6) 00:09:46.900 11.055 - 11.113: 66.8986% ( 14) 00:09:46.900 11.113 - 11.171: 67.0619% ( 19) 00:09:46.900 11.171 - 11.229: 67.1392% ( 9) 00:09:46.900 11.229 - 11.287: 67.2165% ( 9) 00:09:46.900 11.287 - 11.345: 67.2423% ( 3) 00:09:46.900 11.345 - 11.404: 67.5773% ( 39) 00:09:46.900 11.404 - 11.462: 69.0979% ( 177) 00:09:46.900 11.462 - 11.520: 72.6718% ( 416) 00:09:46.900 11.520 - 11.578: 77.1649% ( 523) 00:09:46.900 11.578 - 11.636: 80.9966% ( 446) 00:09:46.900 11.636 - 11.695: 83.8832% ( 336) 00:09:46.900 11.695 - 11.753: 85.5241% ( 191) 00:09:46.900 11.753 - 11.811: 86.4691% ( 110) 00:09:46.900 11.811 - 11.869: 87.1392% ( 78) 00:09:46.900 11.869 - 11.927: 87.4828% ( 40) 00:09:46.900 11.927 - 11.985: 87.7577% ( 32) 00:09:46.900 11.985 - 12.044: 87.9467% ( 22) 00:09:46.900 12.044 - 12.102: 88.0584% ( 13) 00:09:46.900 12.102 - 12.160: 88.1615% ( 12) 00:09:46.900 12.160 - 12.218: 88.3076% ( 17) 00:09:46.900 12.218 - 12.276: 88.3849% ( 9) 00:09:46.900 12.276 - 12.335: 88.5137% ( 15) 00:09:46.900 12.335 - 12.393: 88.6082% ( 11) 00:09:46.900 12.393 - 12.451: 88.7371% ( 15) 00:09:46.900 12.451 - 12.509: 88.9605% ( 26) 00:09:46.900 12.509 - 12.567: 89.2440% ( 33) 00:09:46.900 12.567 - 12.625: 89.5361% ( 34) 00:09:46.900 12.625 - 12.684: 89.8969% 
( 42) 00:09:46.900 12.684 - 12.742: 90.1375% ( 28) 00:09:46.900 12.742 - 12.800: 90.3436% ( 24) 00:09:46.900 12.800 - 12.858: 90.4811% ( 16) 00:09:46.900 12.858 - 12.916: 90.5756% ( 11) 00:09:46.900 12.916 - 12.975: 90.7045% ( 15) 00:09:46.900 12.975 - 13.033: 90.7818% ( 9) 00:09:46.900 13.033 - 13.091: 90.8591% ( 9) 00:09:46.900 13.091 - 13.149: 90.9021% ( 5) 00:09:46.900 13.149 - 13.207: 90.9278% ( 3) 00:09:46.900 13.265 - 13.324: 90.9364% ( 1) 00:09:46.901 13.324 - 13.382: 90.9536% ( 2) 00:09:46.901 13.382 - 13.440: 90.9708% ( 2) 00:09:46.901 13.440 - 13.498: 90.9880% ( 2) 00:09:46.901 13.498 - 13.556: 91.0052% ( 2) 00:09:46.901 13.556 - 13.615: 91.0223% ( 2) 00:09:46.901 13.615 - 13.673: 91.0395% ( 2) 00:09:46.901 13.673 - 13.731: 91.0567% ( 2) 00:09:46.901 13.731 - 13.789: 91.0739% ( 2) 00:09:46.901 13.789 - 13.847: 91.0911% ( 2) 00:09:46.901 13.847 - 13.905: 91.1082% ( 2) 00:09:46.901 13.905 - 13.964: 91.1340% ( 3) 00:09:46.901 13.964 - 14.022: 91.1598% ( 3) 00:09:46.901 14.022 - 14.080: 91.2199% ( 7) 00:09:46.901 14.080 - 14.138: 91.2973% ( 9) 00:09:46.901 14.138 - 14.196: 91.3488% ( 6) 00:09:46.901 14.196 - 14.255: 91.4433% ( 11) 00:09:46.901 14.255 - 14.313: 91.5464% ( 12) 00:09:46.901 14.313 - 14.371: 91.5893% ( 5) 00:09:46.901 14.371 - 14.429: 91.6667% ( 9) 00:09:46.901 14.429 - 14.487: 91.7698% ( 12) 00:09:46.901 14.487 - 14.545: 91.8127% ( 5) 00:09:46.901 14.545 - 14.604: 91.8557% ( 5) 00:09:46.901 14.604 - 14.662: 91.9158% ( 7) 00:09:46.901 14.662 - 14.720: 91.9759% ( 7) 00:09:46.901 14.720 - 14.778: 92.0189% ( 5) 00:09:46.901 14.778 - 14.836: 92.0619% ( 5) 00:09:46.901 14.836 - 14.895: 92.1048% ( 5) 00:09:46.901 14.895 - 15.011: 92.3024% ( 23) 00:09:46.901 15.011 - 15.127: 92.8522% ( 64) 00:09:46.901 15.127 - 15.244: 93.8918% ( 121) 00:09:46.901 15.244 - 15.360: 94.8454% ( 111) 00:09:46.901 15.360 - 15.476: 95.4296% ( 68) 00:09:46.901 15.476 - 15.593: 95.6959% ( 31) 00:09:46.901 15.593 - 15.709: 95.8763% ( 21) 00:09:46.901 15.709 - 15.825: 96.0137% ( 16) 00:09:46.901 15.825 - 15.942: 96.1168% ( 12) 00:09:46.901 15.942 - 16.058: 96.2715% ( 18) 00:09:46.901 16.058 - 16.175: 96.4003% ( 15) 00:09:46.901 16.175 - 16.291: 96.5550% ( 18) 00:09:46.901 16.291 - 16.407: 96.6667% ( 13) 00:09:46.901 16.407 - 16.524: 96.7354% ( 8) 00:09:46.901 16.524 - 16.640: 96.8041% ( 8) 00:09:46.901 16.640 - 16.756: 96.9416% ( 16) 00:09:46.901 16.756 - 16.873: 97.0275% ( 10) 00:09:46.901 16.873 - 16.989: 97.0962% ( 8) 00:09:46.901 16.989 - 17.105: 97.1649% ( 8) 00:09:46.901 17.105 - 17.222: 97.2509% ( 10) 00:09:46.901 17.222 - 17.338: 97.3024% ( 6) 00:09:46.901 17.338 - 17.455: 97.4055% ( 12) 00:09:46.901 17.455 - 17.571: 97.4742% ( 8) 00:09:46.901 17.571 - 17.687: 97.5258% ( 6) 00:09:46.901 17.687 - 17.804: 97.5859% ( 7) 00:09:46.901 17.804 - 17.920: 97.6546% ( 8) 00:09:46.901 17.920 - 18.036: 97.7148% ( 7) 00:09:46.901 18.036 - 18.153: 97.7835% ( 8) 00:09:46.901 18.153 - 18.269: 97.8265% ( 5) 00:09:46.901 18.269 - 18.385: 97.9038% ( 9) 00:09:46.901 18.385 - 18.502: 97.9467% ( 5) 00:09:46.901 18.502 - 18.618: 98.0326% ( 10) 00:09:46.901 18.618 - 18.735: 98.0842% ( 6) 00:09:46.901 18.735 - 18.851: 98.1357% ( 6) 00:09:46.901 18.851 - 18.967: 98.2131% ( 9) 00:09:46.901 18.967 - 19.084: 98.2474% ( 4) 00:09:46.901 19.084 - 19.200: 98.2818% ( 4) 00:09:46.901 19.200 - 19.316: 98.3677% ( 10) 00:09:46.901 19.316 - 19.433: 98.4021% ( 4) 00:09:46.901 19.433 - 19.549: 98.4536% ( 6) 00:09:46.901 19.549 - 19.665: 98.5052% ( 6) 00:09:46.901 19.665 - 19.782: 98.5481% ( 5) 00:09:46.901 19.782 - 19.898: 98.6082% ( 7) 
00:09:46.901 19.898 - 20.015: 98.6684% ( 7) 00:09:46.901 20.015 - 20.131: 98.7027% ( 4) 00:09:46.901 20.131 - 20.247: 98.7629% ( 7) 00:09:46.901 20.247 - 20.364: 98.7801% ( 2) 00:09:46.901 20.364 - 20.480: 98.7973% ( 2) 00:09:46.901 20.480 - 20.596: 98.8230% ( 3) 00:09:46.901 20.596 - 20.713: 98.8660% ( 5) 00:09:46.901 20.713 - 20.829: 98.9003% ( 4) 00:09:46.901 20.829 - 20.945: 98.9175% ( 2) 00:09:46.901 20.945 - 21.062: 98.9347% ( 2) 00:09:46.901 21.062 - 21.178: 98.9691% ( 4) 00:09:46.901 21.178 - 21.295: 98.9948% ( 3) 00:09:46.901 21.295 - 21.411: 99.0120% ( 2) 00:09:46.901 21.411 - 21.527: 99.0292% ( 2) 00:09:46.901 21.527 - 21.644: 99.0722% ( 5) 00:09:46.901 21.760 - 21.876: 99.0979% ( 3) 00:09:46.901 21.876 - 21.993: 99.1237% ( 3) 00:09:46.901 21.993 - 22.109: 99.1495% ( 3) 00:09:46.901 22.109 - 22.225: 99.1838% ( 4) 00:09:46.901 22.225 - 22.342: 99.2268% ( 5) 00:09:46.901 22.342 - 22.458: 99.2784% ( 6) 00:09:46.901 22.458 - 22.575: 99.2869% ( 1) 00:09:46.901 22.575 - 22.691: 99.2955% ( 1) 00:09:46.901 22.691 - 22.807: 99.3127% ( 2) 00:09:46.901 22.807 - 22.924: 99.3643% ( 6) 00:09:46.901 22.924 - 23.040: 99.3900% ( 3) 00:09:46.901 23.156 - 23.273: 99.4072% ( 2) 00:09:46.901 23.273 - 23.389: 99.4158% ( 1) 00:09:46.901 23.505 - 23.622: 99.4244% ( 1) 00:09:46.901 23.622 - 23.738: 99.4330% ( 1) 00:09:46.901 23.738 - 23.855: 99.4588% ( 3) 00:09:46.901 24.087 - 24.204: 99.4759% ( 2) 00:09:46.901 24.204 - 24.320: 99.4931% ( 2) 00:09:46.901 24.436 - 24.553: 99.5189% ( 3) 00:09:46.901 24.553 - 24.669: 99.5275% ( 1) 00:09:46.901 24.669 - 24.785: 99.5447% ( 2) 00:09:46.901 24.785 - 24.902: 99.5619% ( 2) 00:09:46.901 24.902 - 25.018: 99.5790% ( 2) 00:09:46.901 25.018 - 25.135: 99.6048% ( 3) 00:09:46.901 25.135 - 25.251: 99.6220% ( 2) 00:09:46.901 25.251 - 25.367: 99.6392% ( 2) 00:09:46.901 25.716 - 25.833: 99.6649% ( 3) 00:09:46.901 25.949 - 26.065: 99.6821% ( 2) 00:09:46.901 26.298 - 26.415: 99.6993% ( 2) 00:09:46.901 26.415 - 26.531: 99.7165% ( 2) 00:09:46.901 26.647 - 26.764: 99.7251% ( 1) 00:09:46.901 26.880 - 26.996: 99.7337% ( 1) 00:09:46.901 27.113 - 27.229: 99.7423% ( 1) 00:09:46.901 27.345 - 27.462: 99.7509% ( 1) 00:09:46.901 27.578 - 27.695: 99.7595% ( 1) 00:09:46.901 27.811 - 27.927: 99.7680% ( 1) 00:09:46.901 28.509 - 28.625: 99.7766% ( 1) 00:09:46.901 29.091 - 29.207: 99.7938% ( 2) 00:09:46.901 29.207 - 29.324: 99.8024% ( 1) 00:09:46.901 29.324 - 29.440: 99.8110% ( 1) 00:09:46.901 29.440 - 29.556: 99.8196% ( 1) 00:09:46.901 31.185 - 31.418: 99.8282% ( 1) 00:09:46.901 31.418 - 31.651: 99.8368% ( 1) 00:09:46.901 31.884 - 32.116: 99.8454% ( 1) 00:09:46.901 32.582 - 32.815: 99.8540% ( 1) 00:09:46.901 32.815 - 33.047: 99.8625% ( 1) 00:09:46.901 33.513 - 33.745: 99.8711% ( 1) 00:09:46.901 34.211 - 34.444: 99.8797% ( 1) 00:09:46.901 34.444 - 34.676: 99.8883% ( 1) 00:09:46.901 35.840 - 36.073: 99.8969% ( 1) 00:09:46.901 37.004 - 37.236: 99.9055% ( 1) 00:09:46.901 37.236 - 37.469: 99.9141% ( 1) 00:09:46.901 40.029 - 40.262: 99.9313% ( 2) 00:09:46.901 41.891 - 42.124: 99.9399% ( 1) 00:09:46.901 53.993 - 54.225: 99.9485% ( 1) 00:09:46.901 83.782 - 84.247: 99.9570% ( 1) 00:09:46.901 87.505 - 87.971: 99.9656% ( 1) 00:09:46.901 89.367 - 89.833: 99.9742% ( 1) 00:09:46.901 132.189 - 133.120: 99.9828% ( 1) 00:09:46.901 150.807 - 151.738: 99.9914% ( 1) 00:09:46.901 198.284 - 199.215: 100.0000% ( 1) 00:09:46.901 00:09:46.901 ************************************ 00:09:46.901 END TEST nvme_overhead 00:09:46.901 ************************************ 00:09:46.901 00:09:46.901 real 0m1.291s 00:09:46.901 
user 0m1.116s 00:09:46.901 sys 0m0.125s 00:09:46.901 18:54:17 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.901 18:54:17 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:46.901 18:54:17 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:46.901 18:54:17 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:46.901 18:54:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.901 18:54:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:46.901 ************************************ 00:09:46.901 START TEST nvme_arbitration 00:09:46.901 ************************************ 00:09:46.901 18:54:17 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:50.183 Initializing NVMe Controllers 00:09:50.183 Attached to 0000:00:10.0 00:09:50.183 Attached to 0000:00:11.0 00:09:50.183 Attached to 0000:00:13.0 00:09:50.183 Attached to 0000:00:12.0 00:09:50.183 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:50.183 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:50.183 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:50.183 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:50.183 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:50.183 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:50.183 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:50.183 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:50.183 Initialization complete. Launching workers. 00:09:50.183 Starting thread on core 1 with urgent priority queue 00:09:50.183 Starting thread on core 2 with urgent priority queue 00:09:50.183 Starting thread on core 3 with urgent priority queue 00:09:50.183 Starting thread on core 0 with urgent priority queue 00:09:50.183 QEMU NVMe Ctrl (12340 ) core 0: 661.33 IO/s 151.21 secs/100000 ios 00:09:50.183 QEMU NVMe Ctrl (12342 ) core 0: 661.33 IO/s 151.21 secs/100000 ios 00:09:50.183 QEMU NVMe Ctrl (12341 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:09:50.183 QEMU NVMe Ctrl (12342 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:09:50.183 QEMU NVMe Ctrl (12343 ) core 2: 682.67 IO/s 146.48 secs/100000 ios 00:09:50.183 QEMU NVMe Ctrl (12342 ) core 3: 704.00 IO/s 142.05 secs/100000 ios 00:09:50.183 ======================================================== 00:09:50.183 00:09:50.183 00:09:50.183 real 0m3.420s 00:09:50.183 user 0m9.313s 00:09:50.183 sys 0m0.184s 00:09:50.183 18:54:21 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.183 ************************************ 00:09:50.183 END TEST nvme_arbitration 00:09:50.183 ************************************ 00:09:50.183 18:54:21 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:50.441 18:54:21 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:50.441 18:54:21 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:50.441 18:54:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.441 18:54:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.441 ************************************ 00:09:50.441 START TEST nvme_single_aen 00:09:50.441 ************************************ 00:09:50.441 18:54:21 nvme.nvme_single_aen -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:50.720 Asynchronous Event Request test 00:09:50.720 Attached to 0000:00:10.0 00:09:50.720 Attached to 0000:00:11.0 00:09:50.720 Attached to 0000:00:13.0 00:09:50.720 Attached to 0000:00:12.0 00:09:50.720 Reset controller to setup AER completions for this process 00:09:50.720 Registering asynchronous event callbacks... 00:09:50.720 Getting orig temperature thresholds of all controllers 00:09:50.720 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.720 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.720 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.720 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:50.720 Setting all controllers temperature threshold low to trigger AER 00:09:50.720 Waiting for all controllers temperature threshold to be set lower 00:09:50.720 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.720 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:50.720 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.720 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:50.720 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.720 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:50.720 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:50.720 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:50.720 Waiting for all controllers to trigger AER and reset threshold 00:09:50.720 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.720 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.720 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.720 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:50.720 Cleaning up... 
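The single-AEN test above lowers each controller's temperature threshold below the current reading, waits for the resulting Asynchronous Event Request to complete, then restores the threshold from the callback. The registration half of that flow looks roughly like the sketch below; spdk_nvme_ctrlr_register_aer_callback and spdk_nvme_ctrlr_process_admin_completions are real SPDK entry points, while the handler body and polling loop are assumptions.

    #include "spdk/nvme.h"

    static void on_aer(void *arg, const struct spdk_nvme_cpl *cpl) {
        if (spdk_nvme_cpl_is_error(cpl)) {
            return; /* AER aborted, e.g. by a controller reset */
        }
        /* cpl->cdw0 carries the async event type/info; the test reacts to the
         * temperature event by reading log page 2 and resetting the threshold,
         * matching the "aer_cb for log page 2" lines above. */
    }

    static void arm_and_poll(struct spdk_nvme_ctrlr *ctrlr) {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, on_aer, NULL);
        for (;;) {
            /* AER completions only surface while admin completions are polled */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }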
00:09:50.720 ************************************ 00:09:50.720 END TEST nvme_single_aen 00:09:50.720 ************************************ 00:09:50.720 00:09:50.720 real 0m0.296s 00:09:50.720 user 0m0.118s 00:09:50.720 sys 0m0.129s 00:09:50.720 18:54:21 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.720 18:54:21 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:50.720 18:54:21 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:50.720 18:54:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.720 18:54:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.720 18:54:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.720 ************************************ 00:09:50.720 START TEST nvme_doorbell_aers 00:09:50.720 ************************************ 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:50.720 18:54:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:50.978 [2024-11-26 18:54:22.177515] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:00.944 Executing: test_write_invalid_db 00:10:00.944 Waiting for AER completion... 00:10:00.944 Failure: test_write_invalid_db 00:10:00.944 00:10:00.944 Executing: test_invalid_db_write_overflow_sq 00:10:00.944 Waiting for AER completion... 00:10:00.944 Failure: test_invalid_db_write_overflow_sq 00:10:00.944 00:10:00.944 Executing: test_invalid_db_write_overflow_cq 00:10:00.944 Waiting for AER completion... 
00:10:00.944 Failure: test_invalid_db_write_overflow_cq 00:10:00.944 00:10:00.944 18:54:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:00.944 18:54:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:01.203 [2024-11-26 18:54:32.204550] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:11.165 Executing: test_write_invalid_db 00:10:11.165 Waiting for AER completion... 00:10:11.165 Failure: test_write_invalid_db 00:10:11.165 00:10:11.165 Executing: test_invalid_db_write_overflow_sq 00:10:11.165 Waiting for AER completion... 00:10:11.165 Failure: test_invalid_db_write_overflow_sq 00:10:11.165 00:10:11.165 Executing: test_invalid_db_write_overflow_cq 00:10:11.165 Waiting for AER completion... 00:10:11.165 Failure: test_invalid_db_write_overflow_cq 00:10:11.165 00:10:11.165 18:54:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:11.165 18:54:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:11.165 [2024-11-26 18:54:42.260121] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:21.126 Executing: test_write_invalid_db 00:10:21.126 Waiting for AER completion... 00:10:21.126 Failure: test_write_invalid_db 00:10:21.126 00:10:21.126 Executing: test_invalid_db_write_overflow_sq 00:10:21.126 Waiting for AER completion... 00:10:21.126 Failure: test_invalid_db_write_overflow_sq 00:10:21.126 00:10:21.126 Executing: test_invalid_db_write_overflow_cq 00:10:21.126 Waiting for AER completion... 00:10:21.126 Failure: test_invalid_db_write_overflow_cq 00:10:21.126 00:10:21.126 18:54:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:21.126 18:54:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:21.126 [2024-11-26 18:54:52.315237] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.178 Executing: test_write_invalid_db 00:10:31.178 Waiting for AER completion... 00:10:31.178 Failure: test_write_invalid_db 00:10:31.178 00:10:31.178 Executing: test_invalid_db_write_overflow_sq 00:10:31.178 Waiting for AER completion... 00:10:31.178 Failure: test_invalid_db_write_overflow_sq 00:10:31.178 00:10:31.178 Executing: test_invalid_db_write_overflow_cq 00:10:31.178 Waiting for AER completion... 
00:10:31.178 Failure: test_invalid_db_write_overflow_cq 00:10:31.178 00:10:31.178 00:10:31.178 real 0m40.263s 00:10:31.178 user 0m34.230s 00:10:31.178 sys 0m5.617s 00:10:31.178 18:55:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.178 18:55:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:31.178 ************************************ 00:10:31.178 END TEST nvme_doorbell_aers 00:10:31.178 ************************************ 00:10:31.178 18:55:02 nvme -- nvme/nvme.sh@97 -- # uname 00:10:31.178 18:55:02 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:31.178 18:55:02 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:31.178 18:55:02 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:31.178 18:55:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.178 18:55:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.178 ************************************ 00:10:31.178 START TEST nvme_multi_aen 00:10:31.178 ************************************ 00:10:31.179 18:55:02 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:31.179 [2024-11-26 18:55:02.381199] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.179 [2024-11-26 18:55:02.381307] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.179 [2024-11-26 18:55:02.381329] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.179 [2024-11-26 18:55:02.382988] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.179 [2024-11-26 18:55:02.383042] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.179 [2024-11-26 18:55:02.383061] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.179 [2024-11-26 18:55:02.384476] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.179 [2024-11-26 18:55:02.384677] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.179 [2024-11-26 18:55:02.384702] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.179 [2024-11-26 18:55:02.385975] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.179 [2024-11-26 18:55:02.386016] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 00:10:31.179 [2024-11-26 18:55:02.386034] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64926) is not found. Dropping the request. 
00:10:31.436 Child process pid: 65442 00:10:31.694 [Child] Asynchronous Event Request test 00:10:31.694 [Child] Attached to 0000:00:10.0 00:10:31.694 [Child] Attached to 0000:00:11.0 00:10:31.694 [Child] Attached to 0000:00:13.0 00:10:31.694 [Child] Attached to 0000:00:12.0 00:10:31.694 [Child] Registering asynchronous event callbacks... 00:10:31.694 [Child] Getting orig temperature thresholds of all controllers 00:10:31.694 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.694 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.694 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.694 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.694 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:31.694 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.694 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.694 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.694 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.694 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.694 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.694 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.694 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.694 [Child] Cleaning up... 00:10:31.694 Asynchronous Event Request test 00:10:31.694 Attached to 0000:00:10.0 00:10:31.694 Attached to 0000:00:11.0 00:10:31.694 Attached to 0000:00:13.0 00:10:31.694 Attached to 0000:00:12.0 00:10:31.694 Reset controller to setup AER completions for this process 00:10:31.694 Registering asynchronous event callbacks... 
00:10:31.694 Getting orig temperature thresholds of all controllers 00:10:31.694 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.694 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.694 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.694 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:31.694 Setting all controllers temperature threshold low to trigger AER 00:10:31.694 Waiting for all controllers temperature threshold to be set lower 00:10:31.694 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.694 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:31.695 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.695 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:31.695 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.695 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:31.695 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:31.695 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:31.695 Waiting for all controllers to trigger AER and reset threshold 00:10:31.695 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.695 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.695 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.695 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:31.695 Cleaning up... 00:10:31.695 00:10:31.695 real 0m0.637s 00:10:31.695 user 0m0.220s 00:10:31.695 sys 0m0.286s 00:10:31.695 18:55:02 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.695 18:55:02 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:31.695 ************************************ 00:10:31.695 END TEST nvme_multi_aen 00:10:31.695 ************************************ 00:10:31.695 18:55:02 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:31.695 18:55:02 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:31.695 18:55:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.695 18:55:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.695 ************************************ 00:10:31.695 START TEST nvme_startup 00:10:31.695 ************************************ 00:10:31.695 18:55:02 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:31.952 Initializing NVMe Controllers 00:10:31.952 Attached to 0000:00:10.0 00:10:31.952 Attached to 0000:00:11.0 00:10:31.952 Attached to 0000:00:13.0 00:10:31.952 Attached to 0000:00:12.0 00:10:31.952 Initialization complete. 00:10:31.952 Time used:245464.578 (us). 
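The nvme_multi_secondary test that follows launches three spdk_nvme_perf instances against the same controllers, sharing memory group id 0 (-i 0) but pinned to different core masks, so one acts as the primary SPDK process and two as secondaries. A condensed sketch: the flags are taken verbatim from the nvme.sh@51-55 xtrace lines below, while the backgrounding and wait order are inferred from the recorded pids:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # pid0: 5 s run on core 0
    pid0=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # pid1: 3 s run on core 1
    pid1=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # 3 s run on core 2, foreground
    wait "$pid0"
    wait "$pid1"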
00:10:31.952 00:10:31.952 real 0m0.361s 00:10:31.952 user 0m0.156s 00:10:31.952 sys 0m0.151s 00:10:31.952 ************************************ 00:10:31.952 END TEST nvme_startup 00:10:31.952 ************************************ 00:10:31.952 18:55:03 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.952 18:55:03 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:32.211 18:55:03 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:32.211 18:55:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.211 18:55:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.211 18:55:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:32.211 ************************************ 00:10:32.211 START TEST nvme_multi_secondary 00:10:32.211 ************************************ 00:10:32.211 18:55:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:10:32.211 18:55:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65498 00:10:32.211 18:55:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:32.211 18:55:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65499 00:10:32.211 18:55:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:32.211 18:55:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:35.497 Initializing NVMe Controllers 00:10:35.497 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:35.498 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:35.498 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:35.498 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:35.498 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:35.498 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:35.498 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:35.498 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:35.498 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:35.498 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:35.498 Initialization complete. Launching workers. 
00:10:35.498 ======================================================== 00:10:35.498 Latency(us) 00:10:35.498 Device Information : IOPS MiB/s Average min max 00:10:35.498 PCIE (0000:00:10.0) NSID 1 from core 2: 2080.00 8.13 7689.73 1486.50 20596.78 00:10:35.498 PCIE (0000:00:11.0) NSID 1 from core 2: 2080.00 8.13 7692.45 1449.40 21123.53 00:10:35.498 PCIE (0000:00:13.0) NSID 1 from core 2: 2080.00 8.13 7693.54 1490.34 15111.34 00:10:35.498 PCIE (0000:00:12.0) NSID 1 from core 2: 2080.00 8.13 7693.43 1442.01 19036.77 00:10:35.498 PCIE (0000:00:12.0) NSID 2 from core 2: 2080.00 8.13 7694.06 1592.03 17228.99 00:10:35.498 PCIE (0000:00:12.0) NSID 3 from core 2: 2085.32 8.15 7675.55 1509.66 20316.46 00:10:35.498 ======================================================== 00:10:35.498 Total : 12485.35 48.77 7689.79 1442.01 21123.53 00:10:35.498 00:10:35.755 Initializing NVMe Controllers 00:10:35.756 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:35.756 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:35.756 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:35.756 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:35.756 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:35.756 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:35.756 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:35.756 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:35.756 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:35.756 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:35.756 Initialization complete. Launching workers. 00:10:35.756 ======================================================== 00:10:35.756 Latency(us) 00:10:35.756 Device Information : IOPS MiB/s Average min max 00:10:35.756 PCIE (0000:00:10.0) NSID 1 from core 1: 4463.85 17.44 3582.11 1493.34 9406.76 00:10:35.756 PCIE (0000:00:11.0) NSID 1 from core 1: 4463.85 17.44 3584.10 1568.25 9279.48 00:10:35.756 PCIE (0000:00:13.0) NSID 1 from core 1: 4463.85 17.44 3584.04 1494.24 9390.86 00:10:35.756 PCIE (0000:00:12.0) NSID 1 from core 1: 4463.85 17.44 3584.01 1433.21 9800.70 00:10:35.756 PCIE (0000:00:12.0) NSID 2 from core 1: 4463.85 17.44 3584.48 1431.31 9795.63 00:10:35.756 PCIE (0000:00:12.0) NSID 3 from core 1: 4463.85 17.44 3584.72 1428.00 9520.95 00:10:35.756 ======================================================== 00:10:35.756 Total : 26783.12 104.62 3583.91 1428.00 9800.70 00:10:35.756 00:10:35.756 18:55:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65498 00:10:37.655 Initializing NVMe Controllers 00:10:37.655 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:37.655 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:37.655 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:37.655 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:37.655 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:37.655 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:37.655 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:37.655 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:37.655 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:37.655 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:37.655 Initialization complete. Launching workers. 
00:10:37.655 ======================================================== 00:10:37.655 Latency(us) 00:10:37.655 Device Information : IOPS MiB/s Average min max 00:10:37.655 PCIE (0000:00:10.0) NSID 1 from core 0: 6927.64 27.06 2307.70 961.88 9361.30 00:10:37.655 PCIE (0000:00:11.0) NSID 1 from core 0: 6927.64 27.06 2308.95 1000.79 9339.84 00:10:37.655 PCIE (0000:00:13.0) NSID 1 from core 0: 6927.64 27.06 2308.93 990.86 8226.36 00:10:37.655 PCIE (0000:00:12.0) NSID 1 from core 0: 6927.64 27.06 2308.89 948.19 7820.53 00:10:37.655 PCIE (0000:00:12.0) NSID 2 from core 0: 6927.64 27.06 2308.83 904.25 7791.48 00:10:37.655 PCIE (0000:00:12.0) NSID 3 from core 0: 6927.64 27.06 2308.77 839.28 8543.17 00:10:37.655 ======================================================== 00:10:37.655 Total : 41565.85 162.37 2308.68 839.28 9361.30 00:10:37.655 00:10:37.655 18:55:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65499 00:10:37.655 18:55:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65568 00:10:37.655 18:55:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:37.655 18:55:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65569 00:10:37.655 18:55:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:37.655 18:55:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:40.940 Initializing NVMe Controllers 00:10:40.940 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:40.940 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:40.940 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:40.940 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:40.940 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:40.940 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:40.940 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:40.940 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:40.940 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:40.940 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:40.940 Initialization complete. Launching workers. 
00:10:40.940 ======================================================== 00:10:40.940 Latency(us) 00:10:40.940 Device Information : IOPS MiB/s Average min max 00:10:40.940 PCIE (0000:00:10.0) NSID 1 from core 0: 4664.66 18.22 3427.97 1272.34 9129.76 00:10:40.940 PCIE (0000:00:11.0) NSID 1 from core 0: 4669.99 18.24 3425.57 1298.14 9501.75 00:10:40.940 PCIE (0000:00:13.0) NSID 1 from core 0: 4664.66 18.22 3429.33 1300.79 7967.98 00:10:40.940 PCIE (0000:00:12.0) NSID 1 from core 0: 4664.66 18.22 3429.22 1313.34 8320.99 00:10:40.940 PCIE (0000:00:12.0) NSID 2 from core 0: 4664.66 18.22 3429.20 1318.50 8268.49 00:10:40.940 PCIE (0000:00:12.0) NSID 3 from core 0: 4664.66 18.22 3429.03 1309.89 9130.29 00:10:40.940 ======================================================== 00:10:40.940 Total : 27993.30 109.35 3428.39 1272.34 9501.75 00:10:40.940 00:10:41.198 Initializing NVMe Controllers 00:10:41.198 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:41.198 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:41.198 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:41.198 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:41.198 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:41.198 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:41.198 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:41.198 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:41.198 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:41.198 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:41.198 Initialization complete. Launching workers. 00:10:41.198 ======================================================== 00:10:41.198 Latency(us) 00:10:41.198 Device Information : IOPS MiB/s Average min max 00:10:41.198 PCIE (0000:00:10.0) NSID 1 from core 1: 5001.91 19.54 3196.74 978.03 7710.86 00:10:41.198 PCIE (0000:00:11.0) NSID 1 from core 1: 4996.58 19.52 3201.50 993.22 8037.77 00:10:41.198 PCIE (0000:00:13.0) NSID 1 from core 1: 4996.58 19.52 3201.43 920.44 8300.46 00:10:41.198 PCIE (0000:00:12.0) NSID 1 from core 1: 4996.58 19.52 3201.34 891.30 8700.85 00:10:41.198 PCIE (0000:00:12.0) NSID 2 from core 1: 4996.58 19.52 3201.26 859.44 8014.48 00:10:41.198 PCIE (0000:00:12.0) NSID 3 from core 1: 4996.58 19.52 3201.17 821.72 7795.75 00:10:41.198 ======================================================== 00:10:41.198 Total : 29984.83 117.13 3200.57 821.72 8700.85 00:10:41.198 00:10:43.101 Initializing NVMe Controllers 00:10:43.101 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:43.101 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:43.101 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:43.101 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:43.101 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:43.101 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:43.101 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:43.101 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:43.101 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:43.101 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:43.101 Initialization complete. Launching workers. 
00:10:43.101 ======================================================== 00:10:43.101 Latency(us) 00:10:43.101 Device Information : IOPS MiB/s Average min max 00:10:43.101 PCIE (0000:00:10.0) NSID 1 from core 2: 3324.67 12.99 4810.57 973.10 14573.13 00:10:43.101 PCIE (0000:00:11.0) NSID 1 from core 2: 3324.67 12.99 4811.73 1008.25 14904.91 00:10:43.101 PCIE (0000:00:13.0) NSID 1 from core 2: 3324.67 12.99 4811.66 926.41 15459.28 00:10:43.101 PCIE (0000:00:12.0) NSID 1 from core 2: 3324.67 12.99 4811.85 902.28 14214.48 00:10:43.101 PCIE (0000:00:12.0) NSID 2 from core 2: 3324.67 12.99 4807.92 848.14 13831.22 00:10:43.101 PCIE (0000:00:12.0) NSID 3 from core 2: 3324.67 12.99 4807.63 823.71 14000.86 00:10:43.101 ======================================================== 00:10:43.101 Total : 19947.99 77.92 4810.23 823.71 15459.28 00:10:43.101 00:10:43.101 ************************************ 00:10:43.101 END TEST nvme_multi_secondary 00:10:43.101 ************************************ 00:10:43.101 18:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65568 00:10:43.101 18:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65569 00:10:43.101 00:10:43.101 real 0m10.801s 00:10:43.101 user 0m18.753s 00:10:43.101 sys 0m1.090s 00:10:43.101 18:55:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.101 18:55:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:43.101 18:55:14 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:43.101 18:55:14 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:43.101 18:55:14 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64518 ]] 00:10:43.101 18:55:14 nvme -- common/autotest_common.sh@1094 -- # kill 64518 00:10:43.101 18:55:14 nvme -- common/autotest_common.sh@1095 -- # wait 64518 00:10:43.101 [2024-11-26 18:55:14.018310] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.101 [2024-11-26 18:55:14.018412] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.101 [2024-11-26 18:55:14.018465] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.101 [2024-11-26 18:55:14.018495] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.101 [2024-11-26 18:55:14.021517] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.101 [2024-11-26 18:55:14.021854] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.101 [2024-11-26 18:55:14.021898] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.101 [2024-11-26 18:55:14.021937] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.101 [2024-11-26 18:55:14.024579] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 
00:10:43.101 [2024-11-26 18:55:14.024638] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.101 [2024-11-26 18:55:14.024661] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.101 [2024-11-26 18:55:14.024681] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.101 [2024-11-26 18:55:14.027000] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.102 [2024-11-26 18:55:14.027068] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.102 [2024-11-26 18:55:14.027090] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.102 [2024-11-26 18:55:14.027111] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65441) is not found. Dropping the request. 00:10:43.102 18:55:14 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:10:43.102 18:55:14 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:10:43.102 18:55:14 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:43.102 18:55:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.102 18:55:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.102 18:55:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.102 ************************************ 00:10:43.102 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:43.102 ************************************ 00:10:43.102 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:43.102 * Looking for test storage... 
00:10:43.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:43.360 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.361 --rc genhtml_branch_coverage=1 00:10:43.361 --rc genhtml_function_coverage=1 00:10:43.361 --rc genhtml_legend=1 00:10:43.361 --rc geninfo_all_blocks=1 00:10:43.361 --rc geninfo_unexecuted_blocks=1 00:10:43.361 00:10:43.361 ' 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.361 --rc genhtml_branch_coverage=1 00:10:43.361 --rc genhtml_function_coverage=1 00:10:43.361 --rc genhtml_legend=1 00:10:43.361 --rc geninfo_all_blocks=1 00:10:43.361 --rc geninfo_unexecuted_blocks=1 00:10:43.361 00:10:43.361 ' 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.361 --rc genhtml_branch_coverage=1 00:10:43.361 --rc genhtml_function_coverage=1 00:10:43.361 --rc genhtml_legend=1 00:10:43.361 --rc geninfo_all_blocks=1 00:10:43.361 --rc geninfo_unexecuted_blocks=1 00:10:43.361 00:10:43.361 ' 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.361 --rc genhtml_branch_coverage=1 00:10:43.361 --rc genhtml_function_coverage=1 00:10:43.361 --rc genhtml_legend=1 00:10:43.361 --rc geninfo_all_blocks=1 00:10:43.361 --rc geninfo_unexecuted_blocks=1 00:10:43.361 00:10:43.361 ' 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:43.361 
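The bdf the next test targets comes from get_first_nvme_bdf, which the @1498-@1512 xtrace lines below expand in full; it reduces to a two-line shell idiom, condensed here with nothing added beyond what the trace shows:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}   # 0000:00:10.0 in this run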
18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65734 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65734 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65734 ']' 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
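Once spdk_tgt is listening on /var/tmp/spdk.sock, the test drives the stuck-admin-command scenario entirely through rpc.py; the sketch below condenses the calls recorded in the trace that follows. The flags are verbatim from the log, the backgrounded send and the sleep before the reset mirror the @50-@57 lines, and the elapsed-time check against test_timeout is omitted here:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # queue a single admin-command error (sct 0 / sc 1) for opcode 10 (GET FEATURES)
    # and hold the request for up to 15 s instead of submitting it
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # issue GET FEATURES (number of queues, cdw10=7); it parks on the injected error
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
    get_feat_pid=$!
    sleep 2
    "$rpc" bdev_nvme_reset_controller nvme0   # the reset must complete the stuck command
    wait "$get_feat_pid"
    "$rpc" bdev_nvme_detach_controller nvme0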
00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.361 18:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:43.620 [2024-11-26 18:55:14.610387] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:10:43.620 [2024-11-26 18:55:14.610539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65734 ] 00:10:43.620 [2024-11-26 18:55:14.804876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.878 [2024-11-26 18:55:14.950276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.878 [2024-11-26 18:55:14.950348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.878 [2024-11-26 18:55:14.950420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.878 [2024-11-26 18:55:14.950432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:44.813 nvme0n1 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_JNKMn.txt 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:44.813 true 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732647315 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65758 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:44.813 18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:44.813 
18:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:46.712 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:46.712 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.712 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:46.712 [2024-11-26 18:55:17.886185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:46.712 [2024-11-26 18:55:17.886604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:46.712 [2024-11-26 18:55:17.886655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:46.712 [2024-11-26 18:55:17.886678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.712 [2024-11-26 18:55:17.888613] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:46.712 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65758 00:10:46.712 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.712 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65758 00:10:46.712 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65758 00:10:46.712 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:46.712 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:46.712 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:46.712 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.712 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_JNKMn.txt 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:46.970 18:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_JNKMn.txt 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65734 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65734 ']' 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65734 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65734 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.970 killing process with pid 65734 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65734' 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65734 00:10:46.970 18:55:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65734 00:10:49.498 18:55:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:49.498 18:55:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:49.498 00:10:49.498 real 0m6.024s 00:10:49.498 user 0m21.396s 00:10:49.498 sys 0m0.643s 00:10:49.498 18:55:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:49.498 ************************************ 00:10:49.498 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:49.498 18:55:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:49.498 ************************************ 00:10:49.498 18:55:20 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:49.498 18:55:20 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:49.498 18:55:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:49.498 18:55:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.498 18:55:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:49.498 ************************************ 00:10:49.498 START TEST nvme_fio 00:10:49.498 ************************************ 00:10:49.498 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:10:49.498 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:49.498 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:49.498 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:49.498 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:49.498 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:10:49.498 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:49.498 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:49.498 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:49.498 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:49.498 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:49.498 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:49.498 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:49.498 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:49.498 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:49.498 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:49.498 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:49.498 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:49.756 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:49.756 18:55:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:49.756 18:55:20 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:49.756 18:55:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:50.014 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:50.014 fio-3.35 00:10:50.014 Starting 1 thread 00:10:53.310 00:10:53.310 test: (groupid=0, jobs=1): err= 0: pid=65910: Tue Nov 26 18:55:24 2024 00:10:53.310 read: IOPS=16.0k, BW=62.4MiB/s (65.4MB/s)(125MiB/2001msec) 00:10:53.310 slat (nsec): min=4574, max=53896, avg=6253.66, stdev=2254.54 00:10:53.310 clat (usec): min=342, max=9138, avg=3989.57, stdev=882.77 00:10:53.310 lat (usec): min=347, max=9143, avg=3995.82, stdev=883.95 00:10:53.310 clat percentiles (usec): 00:10:53.310 | 1.00th=[ 2573], 5.00th=[ 2999], 10.00th=[ 3294], 20.00th=[ 3490], 00:10:53.310 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 3851], 00:10:53.310 | 70.00th=[ 4146], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 6194], 00:10:53.310 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 7832], 99.95th=[ 8586], 00:10:53.310 | 99.99th=[ 8979] 00:10:53.310 bw ( KiB/s): min=51576, max=72480, per=99.66%, avg=63629.33, stdev=10813.75, samples=3 00:10:53.310 iops : min=12894, max=18120, avg=15907.33, stdev=2703.44, samples=3 00:10:53.310 write: IOPS=16.0k, BW=62.5MiB/s (65.5MB/s)(125MiB/2001msec); 0 zone resets 00:10:53.310 slat (nsec): min=4688, max=48636, avg=6312.46, stdev=2285.03 00:10:53.310 clat (usec): min=245, max=8965, avg=3989.03, stdev=884.31 00:10:53.310 lat (usec): min=251, max=8970, avg=3995.35, stdev=885.51 00:10:53.310 clat percentiles (usec): 00:10:53.310 | 1.00th=[ 2507], 5.00th=[ 2999], 10.00th=[ 3261], 20.00th=[ 3490], 00:10:53.310 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 3851], 00:10:53.310 | 70.00th=[ 4113], 80.00th=[ 4424], 90.00th=[ 4948], 95.00th=[ 6194], 00:10:53.310 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 7635], 99.95th=[ 8356], 00:10:53.310 | 99.99th=[ 8848] 00:10:53.310 bw ( KiB/s): min=50960, max=71976, per=98.96%, avg=63325.33, stdev=10989.41, samples=3 00:10:53.310 iops : min=12740, max=17994, avg=15831.33, stdev=2747.35, samples=3 00:10:53.310 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:53.310 lat (msec) : 2=0.21%, 4=66.82%, 10=32.93% 00:10:53.310 cpu : usr=98.90%, sys=0.10%, ctx=3, majf=0, 
minf=606 00:10:53.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:53.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.310 issued rwts: total=31940,32010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.310 00:10:53.310 Run status group 0 (all jobs): 00:10:53.310 READ: bw=62.4MiB/s (65.4MB/s), 62.4MiB/s-62.4MiB/s (65.4MB/s-65.4MB/s), io=125MiB (131MB), run=2001-2001msec 00:10:53.310 WRITE: bw=62.5MiB/s (65.5MB/s), 62.5MiB/s-62.5MiB/s (65.5MB/s-65.5MB/s), io=125MiB (131MB), run=2001-2001msec 00:10:53.310 ----------------------------------------------------- 00:10:53.310 Suppressions used: 00:10:53.310 count bytes template 00:10:53.310 1 32 /usr/src/fio/parse.c 00:10:53.310 1 8 libtcmalloc_minimal.so 00:10:53.310 ----------------------------------------------------- 00:10:53.310 00:10:53.310 18:55:24 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:53.310 18:55:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:53.310 18:55:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:53.310 18:55:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:53.568 18:55:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:53.568 18:55:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:53.826 18:55:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:53.826 18:55:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:53.826 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:53.826 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:53.826 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:53.826 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:53.826 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:53.826 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:53.826 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:53.826 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:53.826 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:53.826 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:53.826 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:54.085 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:54.085 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:54.085 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:54.085 18:55:25 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:54.085 18:55:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:54.085 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:54.085 fio-3.35 00:10:54.085 Starting 1 thread 00:10:57.369 00:10:57.369 test: (groupid=0, jobs=1): err= 0: pid=65976: Tue Nov 26 18:55:28 2024 00:10:57.369 read: IOPS=13.8k, BW=54.1MiB/s (56.7MB/s)(108MiB/2001msec) 00:10:57.369 slat (nsec): min=4511, max=53677, avg=7029.56, stdev=2722.90 00:10:57.369 clat (usec): min=235, max=11668, avg=4609.70, stdev=961.84 00:10:57.370 lat (usec): min=241, max=11716, avg=4616.73, stdev=963.09 00:10:57.370 clat percentiles (usec): 00:10:57.370 | 1.00th=[ 3228], 5.00th=[ 3458], 10.00th=[ 3589], 20.00th=[ 3785], 00:10:57.370 | 30.00th=[ 4080], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4686], 00:10:57.370 | 70.00th=[ 4817], 80.00th=[ 5080], 90.00th=[ 5866], 95.00th=[ 6849], 00:10:57.370 | 99.00th=[ 7373], 99.50th=[ 7898], 99.90th=[ 9241], 99.95th=[ 9896], 00:10:57.370 | 99.99th=[11600] 00:10:57.370 bw ( KiB/s): min=52191, max=58992, per=100.00%, avg=55850.33, stdev=3429.92, samples=3 00:10:57.370 iops : min=13047, max=14748, avg=13962.33, stdev=857.88, samples=3 00:10:57.370 write: IOPS=13.8k, BW=54.0MiB/s (56.7MB/s)(108MiB/2001msec); 0 zone resets 00:10:57.370 slat (nsec): min=4658, max=54904, avg=7187.10, stdev=2715.47 00:10:57.370 clat (usec): min=299, max=11466, avg=4607.71, stdev=954.62 00:10:57.370 lat (usec): min=305, max=11475, avg=4614.89, stdev=955.86 00:10:57.370 clat percentiles (usec): 00:10:57.370 | 1.00th=[ 3195], 5.00th=[ 3490], 10.00th=[ 3589], 20.00th=[ 3785], 00:10:57.370 | 30.00th=[ 4080], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4686], 00:10:57.370 | 70.00th=[ 4817], 80.00th=[ 5080], 90.00th=[ 5866], 95.00th=[ 6849], 00:10:57.370 | 99.00th=[ 7373], 99.50th=[ 7898], 99.90th=[ 9241], 99.95th=[ 9765], 00:10:57.370 | 99.99th=[11207] 00:10:57.370 bw ( KiB/s): min=52471, max=58768, per=100.00%, avg=55863.67, stdev=3176.78, samples=3 00:10:57.370 iops : min=13117, max=14692, avg=13965.67, stdev=794.59, samples=3 00:10:57.370 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:10:57.370 lat (msec) : 2=0.05%, 4=28.04%, 10=71.81%, 20=0.05% 00:10:57.370 cpu : usr=98.65%, sys=0.10%, ctx=3, majf=0, minf=607 00:10:57.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:57.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.370 issued rwts: total=27699,27686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.370 00:10:57.370 Run status group 0 (all jobs): 00:10:57.370 READ: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=108MiB (113MB), run=2001-2001msec 00:10:57.370 WRITE: bw=54.0MiB/s (56.7MB/s), 54.0MiB/s-54.0MiB/s (56.7MB/s-56.7MB/s), io=108MiB (113MB), run=2001-2001msec 00:10:57.370 ----------------------------------------------------- 00:10:57.370 Suppressions used: 00:10:57.370 count bytes template 00:10:57.370 1 32 /usr/src/fio/parse.c 00:10:57.370 1 8 libtcmalloc_minimal.so 00:10:57.370 ----------------------------------------------------- 00:10:57.370 
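Each fio cycle above pairs the spdk_nvme ioengine plugin (loaded via LD_PRELOAD) with app/fio/nvme/example_config.fio. The job file itself is not captured in this log, so the one below is a guess reconstructed from the header fio prints for every run (randrw, 4096-byte blocks, ioengine=spdk, iodepth=128) and the roughly 2-second runtimes; only the invocation line is taken from the trace:

    cat > /tmp/example_config.fio <<'EOF'
    [global]
    ioengine=spdk
    thread=1
    direct=1
    time_based=1
    runtime=2
    iodepth=128
    rw=randrw
    bs=4096

    [test]
    numjobs=1
    EOF
    # colons in the PCI address become dots in the plugin's filename syntax,
    # exactly as in the invocations recorded in this log
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio \
        /tmp/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096

The log additionally preloads libasan.so.8 because this build is ASAN-instrumented; that part is specific to the CI configuration rather than to the plugin itself.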
00:10:57.370 18:55:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:57.370 18:55:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:57.370 18:55:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:57.370 18:55:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:57.629 18:55:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:57.629 18:55:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:58.196 18:55:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:58.196 18:55:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:58.196 18:55:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:58.196 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:58.196 fio-3.35 00:10:58.196 Starting 1 thread 00:11:01.590 00:11:01.590 test: (groupid=0, jobs=1): err= 0: pid=66038: Tue Nov 26 18:55:32 2024 00:11:01.590 read: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(98.3MiB/2001msec) 00:11:01.590 slat (nsec): min=4558, max=57626, avg=8913.15, stdev=5067.57 00:11:01.590 clat (usec): min=250, max=10626, avg=5074.00, stdev=1520.63 00:11:01.590 lat (usec): min=255, max=10632, avg=5082.91, stdev=1524.50 00:11:01.590 clat percentiles (usec): 00:11:01.590 | 1.00th=[ 2900], 5.00th=[ 3359], 10.00th=[ 3490], 20.00th=[ 3687], 00:11:01.590 | 
30.00th=[ 3818], 40.00th=[ 3982], 50.00th=[ 4359], 60.00th=[ 5669], 00:11:01.590 | 70.00th=[ 6325], 80.00th=[ 6718], 90.00th=[ 7308], 95.00th=[ 7504], 00:11:01.590 | 99.00th=[ 7635], 99.50th=[ 7832], 99.90th=[ 9896], 99.95th=[10159], 00:11:01.590 | 99.99th=[10421] 00:11:01.590 bw ( KiB/s): min=39736, max=53944, per=92.07%, avg=46330.67, stdev=7158.57, samples=3 00:11:01.590 iops : min= 9934, max=13486, avg=11582.67, stdev=1789.64, samples=3 00:11:01.590 write: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(98.2MiB/2001msec); 0 zone resets 00:11:01.590 slat (nsec): min=4653, max=58184, avg=9057.99, stdev=5056.39 00:11:01.590 clat (usec): min=349, max=10557, avg=5068.46, stdev=1519.38 00:11:01.590 lat (usec): min=355, max=10566, avg=5077.51, stdev=1523.26 00:11:01.590 clat percentiles (usec): 00:11:01.590 | 1.00th=[ 2933], 5.00th=[ 3359], 10.00th=[ 3490], 20.00th=[ 3687], 00:11:01.590 | 30.00th=[ 3818], 40.00th=[ 3982], 50.00th=[ 4359], 60.00th=[ 5669], 00:11:01.590 | 70.00th=[ 6325], 80.00th=[ 6718], 90.00th=[ 7308], 95.00th=[ 7439], 00:11:01.590 | 99.00th=[ 7635], 99.50th=[ 7832], 99.90th=[ 9896], 99.95th=[10159], 00:11:01.590 | 99.99th=[10421] 00:11:01.590 bw ( KiB/s): min=39808, max=54176, per=92.03%, avg=46266.67, stdev=7293.02, samples=3 00:11:01.590 iops : min= 9952, max=13544, avg=11566.67, stdev=1823.26, samples=3 00:11:01.590 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:11:01.590 lat (msec) : 2=0.10%, 4=40.24%, 10=59.54%, 20=0.08% 00:11:01.590 cpu : usr=98.75%, sys=0.00%, ctx=4, majf=0, minf=606 00:11:01.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:01.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.590 issued rwts: total=25174,25148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.590 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.590 00:11:01.590 Run status group 0 (all jobs): 00:11:01.590 READ: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=98.3MiB (103MB), run=2001-2001msec 00:11:01.590 WRITE: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=98.2MiB (103MB), run=2001-2001msec 00:11:01.590 ----------------------------------------------------- 00:11:01.590 Suppressions used: 00:11:01.590 count bytes template 00:11:01.590 1 32 /usr/src/fio/parse.c 00:11:01.590 1 8 libtcmalloc_minimal.so 00:11:01.590 ----------------------------------------------------- 00:11:01.590 00:11:01.590 18:55:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:01.590 18:55:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:01.590 18:55:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:01.590 18:55:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:01.848 18:55:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:01.848 18:55:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:02.105 18:55:33 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:02.105 18:55:33 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:02.105 18:55:33 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:02.363 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:02.363 fio-3.35 00:11:02.363 Starting 1 thread 00:11:06.571 00:11:06.571 test: (groupid=0, jobs=1): err= 0: pid=66099: Tue Nov 26 18:55:37 2024 00:11:06.571 read: IOPS=15.6k, BW=60.8MiB/s (63.7MB/s)(122MiB/2001msec) 00:11:06.571 slat (usec): min=4, max=615, avg= 6.28, stdev= 5.21 00:11:06.571 clat (usec): min=285, max=9814, avg=4101.86, stdev=1014.65 00:11:06.571 lat (usec): min=293, max=9858, avg=4108.14, stdev=1016.05 00:11:06.571 clat percentiles (usec): 00:11:06.571 | 1.00th=[ 2671], 5.00th=[ 3228], 10.00th=[ 3392], 20.00th=[ 3523], 00:11:06.571 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3851], 00:11:06.571 | 70.00th=[ 3982], 80.00th=[ 4293], 90.00th=[ 5932], 95.00th=[ 6325], 00:11:06.571 | 99.00th=[ 7570], 99.50th=[ 8094], 99.90th=[ 8586], 99.95th=[ 9110], 00:11:06.571 | 99.99th=[ 9765] 00:11:06.571 bw ( KiB/s): min=53984, max=68824, per=98.66%, avg=61416.00, stdev=7420.03, samples=3 00:11:06.571 iops : min=13496, max=17206, avg=15354.00, stdev=1855.01, samples=3 00:11:06.571 write: IOPS=15.6k, BW=60.8MiB/s (63.8MB/s)(122MiB/2001msec); 0 zone resets 00:11:06.571 slat (usec): min=4, max=582, avg= 6.39, stdev= 5.12 00:11:06.571 clat (usec): min=319, max=9743, avg=4091.76, stdev=1013.44 00:11:06.571 lat (usec): min=329, max=9752, avg=4098.15, stdev=1014.83 00:11:06.571 clat percentiles (usec): 00:11:06.571 | 1.00th=[ 2671], 5.00th=[ 3228], 10.00th=[ 3392], 20.00th=[ 3523], 00:11:06.571 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3851], 00:11:06.571 | 70.00th=[ 3982], 80.00th=[ 4228], 90.00th=[ 
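Before each of these fio runs, nvme.sh probes the controller twice with spdk_nvme_identify: once to confirm it exposes an active namespace, and once to check for "Extended Data LBA" support, which decides the --bs handed to the plugin. Every controller in this log takes the plain 4096-byte branch. A hedged reconstruction of that per-controller loop, with the extended-LBA block size left as a hypothetical value since only the 4096 branch appears in the trace, and fio_nvme standing for the traced helper:

```bash
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
bdfs=(0000:00:11.0 0000:00:12.0 0000:00:13.0)   # addresses exercised in this excerpt

for bdf in "${bdfs[@]}"; do
    # Skip controllers with no active namespace (nvme.sh@35 in the trace).
    "$identify" -r "trtype:PCIe traddr:$bdf" \
        | grep -qE '^Namespace ID:[0-9]+' || continue

    # nvme.sh@38/@41: pick the block size from the LBA format.
    if "$identify" -r "trtype:PCIe traddr:$bdf" | grep -q 'Extended Data LBA'; then
        bs=4104   # hypothetical extended-LBA size; this branch never fires above
    else
        bs=4096   # plain 4 KiB LBAs, the branch every controller above takes
    fi

    # ':' separates multiple filenames in fio, so the traddr is written with dots.
    fio_nvme "$cfg" "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
done
```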
5932], 95.00th=[ 6325], 00:11:06.571 | 99.00th=[ 7635], 99.50th=[ 8094], 99.90th=[ 8717], 99.95th=[ 8979], 00:11:06.571 | 99.99th=[ 9634] 00:11:06.571 bw ( KiB/s): min=54232, max=68272, per=97.87%, avg=60960.00, stdev=7038.20, samples=3 00:11:06.571 iops : min=13558, max=17068, avg=15240.00, stdev=1759.55, samples=3 00:11:06.571 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:06.571 lat (msec) : 2=0.17%, 4=70.53%, 10=29.28% 00:11:06.571 cpu : usr=97.80%, sys=0.65%, ctx=27, majf=0, minf=604 00:11:06.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:06.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.571 issued rwts: total=31142,31160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.571 00:11:06.571 Run status group 0 (all jobs): 00:11:06.571 READ: bw=60.8MiB/s (63.7MB/s), 60.8MiB/s-60.8MiB/s (63.7MB/s-63.7MB/s), io=122MiB (128MB), run=2001-2001msec 00:11:06.571 WRITE: bw=60.8MiB/s (63.8MB/s), 60.8MiB/s-60.8MiB/s (63.8MB/s-63.8MB/s), io=122MiB (128MB), run=2001-2001msec 00:11:06.879 ----------------------------------------------------- 00:11:06.879 Suppressions used: 00:11:06.879 count bytes template 00:11:06.879 1 32 /usr/src/fio/parse.c 00:11:06.879 1 8 libtcmalloc_minimal.so 00:11:06.879 ----------------------------------------------------- 00:11:06.879 00:11:06.879 18:55:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:06.879 18:55:37 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:06.880 00:11:06.880 real 0m17.603s 00:11:06.880 user 0m13.820s 00:11:06.880 sys 0m3.121s 00:11:06.880 18:55:37 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.880 18:55:37 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:06.880 ************************************ 00:11:06.880 END TEST nvme_fio 00:11:06.880 ************************************ 00:11:06.880 00:11:06.880 real 1m30.537s 00:11:06.880 user 3m45.076s 00:11:06.880 sys 0m15.147s 00:11:06.880 ************************************ 00:11:06.880 END TEST nvme 00:11:06.880 ************************************ 00:11:06.880 18:55:37 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.880 18:55:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:06.880 18:55:37 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:06.880 18:55:37 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:06.880 18:55:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:06.880 18:55:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.880 18:55:37 -- common/autotest_common.sh@10 -- # set +x 00:11:06.880 ************************************ 00:11:06.880 START TEST nvme_scc 00:11:06.880 ************************************ 00:11:06.880 18:55:37 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:06.880 * Looking for test storage... 
00:11:06.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:06.880 18:55:38 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.880 18:55:38 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.880 18:55:38 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:07.142 18:55:38 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@345 -- # : 1 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.142 18:55:38 nvme_scc -- scripts/common.sh@368 -- # return 0 00:11:07.142 18:55:38 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.142 18:55:38 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:07.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.142 --rc genhtml_branch_coverage=1 00:11:07.142 --rc genhtml_function_coverage=1 00:11:07.142 --rc genhtml_legend=1 00:11:07.143 --rc geninfo_all_blocks=1 00:11:07.143 --rc geninfo_unexecuted_blocks=1 00:11:07.143 00:11:07.143 ' 00:11:07.143 18:55:38 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:07.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.143 --rc genhtml_branch_coverage=1 00:11:07.143 --rc genhtml_function_coverage=1 00:11:07.143 --rc genhtml_legend=1 00:11:07.143 --rc geninfo_all_blocks=1 00:11:07.143 --rc geninfo_unexecuted_blocks=1 00:11:07.143 00:11:07.143 ' 00:11:07.143 18:55:38 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:07.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.143 --rc genhtml_branch_coverage=1 00:11:07.143 --rc genhtml_function_coverage=1 00:11:07.143 --rc genhtml_legend=1 00:11:07.143 --rc geninfo_all_blocks=1 00:11:07.143 --rc geninfo_unexecuted_blocks=1 00:11:07.143 00:11:07.143 ' 00:11:07.143 18:55:38 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:07.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.143 --rc genhtml_branch_coverage=1 00:11:07.143 --rc genhtml_function_coverage=1 00:11:07.143 --rc genhtml_legend=1 00:11:07.143 --rc geninfo_all_blocks=1 00:11:07.143 --rc geninfo_unexecuted_blocks=1 00:11:07.143 00:11:07.143 ' 00:11:07.143 18:55:38 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:07.143 18:55:38 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.143 18:55:38 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.143 18:55:38 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.143 18:55:38 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.143 18:55:38 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.143 18:55:38 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.143 18:55:38 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.143 18:55:38 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:07.143 18:55:38 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
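The nvme_scc prologue above also traces scripts/common.sh's version gate: "lt 1.15 2" splits both version strings on '.', '-' and ':' and compares the components numerically, which is why lcov 1.15 is correctly treated as older than 2 even though a plain string comparison would say otherwise. A condensed sketch of that comparison, assuming the semantics shown in the cmp_versions trace (the original pads with decimal() and handles more operators):

```bash
# Condensed sketch of "lt" / cmp_versions from scripts/common.sh.
lt_sketch() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # same IFS split as scripts/common.sh@336
    IFS=.-: read -ra ver2 <<< "$2"

    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing or non-numeric components count as 0, as decimal() does.
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal => not less-than
}

# lt_sketch 1.15 2 && echo older   # prints "older", matching the trace's return 0
```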
00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:07.143 18:55:38 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:07.143 18:55:38 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:07.143 18:55:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:07.143 18:55:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:07.143 18:55:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:07.143 18:55:38 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:07.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:07.404 Waiting for block devices as requested 00:11:07.661 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:07.662 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:07.662 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:07.662 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.924 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:12.924 18:55:43 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:12.924 18:55:43 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:12.924 18:55:43 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:12.924 18:55:43 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:12.924 18:55:43 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.924 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.925 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:12.926 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:12.926 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:12.926 18:55:44 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.926 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:12.927 18:55:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:11:12.927 
18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.927 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
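
The trace above repeats one idiom for every field of the id-ns output: set IFS to ':', read each "field : value" line into reg/val, skip lines with an empty value, and eval the pair into a global associative array named after the device node (ng0n1 here). A minimal sketch of that pattern, assuming bash 4.2+ and an nvme-cli binary at the path the trace uses; this is an illustration, not the SPDK functions.sh source:

    nvme_get() {
        local ref=$1 reg val            # ref names the target array, e.g. ng0n1
        shift                           # remaining args form the nvme-cli command
        local -gA "$ref=()"             # global associative array, as in the trace
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue   # banner/blank lines carry no ':' payload
            reg=${reg//[[:space:]]/}    # "nsze " -> "nsze"
            val=${val# }                # drop the single space after ':'
            eval "${ref}[\$reg]=\$val"  # e.g. ng0n1[nsze]=0x140000
        done < <("$@")
    }

    # Usage mirroring the log (paths from the trace):
    #   nvme_get ng0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
    #   echo "${ng0n1[nsze]}"   # -> 0x140000

Because the last variable of read keeps the remainder of the line, multi-colon values such as '0 rwl:0 idle_power:- active_power:-' land intact in a single array slot, which is exactly what the rwt entries above show.
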
00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:11:12.928 18:55:44 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:12.928 18:55:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:12.929 18:55:44 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:12.929 18:55:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.929 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:12.930 18:55:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:12.930 18:55:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:12.930 18:55:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:12.930 18:55:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:12.930 18:55:44 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:12.930 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 
18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:12.931 
18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:12.931 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.199 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
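
Once a controller's id-ctrl fields are parsed, the trace (the functions.sh@58-63 records for nvme0 above) files it into a small registry: maps from device name to the array holding its fields, to its namespace map, and to its PCI address, plus an ordered index. A sketch of that bookkeeping with names and values taken from the nvme0 pass; the PCI address is hard-coded here, since this trace does not show how the real script derives it from sysfs:

    declare -A ctrls nvmes bdfs         # controller registries
    declare -a ordered_ctrls            # positional index by controller number

    ctrl_dev=nvme0                      # from the /sys/class/nvme/nvme* loop
    pci=0000:00:11.0                    # value seen at functions.sh@62

    ctrls["$ctrl_dev"]=nvme0            # id-ctrl fields live in assoc array 'nvme0'
    nvmes["$ctrl_dev"]=nvme0_ns         # name of this controller's namespace map
    bdfs["$ctrl_dev"]=$pci              # controller -> PCI bus:device.function
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # slot 0 -> nvme0

    # Each namespace map is itself an assoc array keyed by namespace id:
    declare -A nvme0_ns
    ns=nvme0n1
    nvme0_ns[${ns##*n}]=$ns             # "${ns##*n}" is the nsid: 1 -> nvme0n1

Storing array names rather than contents lets later code reach any controller's fields through a bash nameref, which is what the 'local -n _ctrl_ns=...' steps in this trace do.
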
00:11:13.200 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1 id-ctrl (cont.): oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:11:13.201 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1 power states: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:11:13.201 18:55:44 nvme_scc -- nvme/functions.sh@53-57 -- # namespace scan for nvme1: found /sys/class/nvme/nvme1/ng1n1; nvme_get ng1n1 id-ns /dev/ng1n1 (via /usr/local/src/nvme-cli/nvme)
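The per-field churn traced above is nvme_get splitting "name : value" lines from nvme-cli on ':' (the repeated IFS=: / read -r reg val steps) and eval-assigning each pair into a global associative array named after the device. A stripped-down sketch of that mechanism; this is not the SPDK original, and nvme_get_sketch with its whitespace handling is a simplifying assumption:

    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                  # global assoc array, e.g. ng1n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # strip padding around the key
            val=${val# }                     # drop the space after ':'
            [[ -n $val ]] && eval "${ref}[\$reg]=\$val"
        done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
    }
    # usage: nvme_get_sketch ng1n1 /dev/ng1n1; echo "${ng1n1[nsze]}"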
00:11:13.202 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127; dps nmic rescap fpi nawun nawupf nacwu nabsn nabo nabspf noiob nvmcap npwg npwa npdg npda nows nulbaf anagrpid nsattr nvmsetid endgid all 0; nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:13.203 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:11:13.204 18:55:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng1n1; next match: /sys/class/nvme/nvme1/nvme1n1; nvme_get nvme1n1 id-ns /dev/nvme1n1
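Worth noting in the captured values: flbas=0x7 selects LBA format 7, whose descriptor reads 'ms:64 lbads:12 rp:0 (in use)'. A small worked decode of the in-use block size from those strings; the array literals are copied from the trace, and the bit layout follows the NVMe Identify Namespace definition (format index in flbas bits 3:0):

    declare -A ng1n1=([flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)')
    fmt=$(( ${ng1n1[flbas]} & 0xf ))           # bits 3:0 -> format index 7
    lbaf=${ng1n1[lbaf$fmt]}
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # -> 12
    echo "block size: $((1 << lbads)) bytes"   # -> 4096 (plus ms:64 metadata)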
00:11:13.204 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 id-ns: identical values to ng1n1 above (same namespace through the block node): nsze=ncap=nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127, remaining fields 0, nguid/eui64 all-zero, lbaf0-lbaf7 as above with lbaf7 in use
00:11:13.206 18:55:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme1n1
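Both the ng1n1 character node and the nvme1n1 block node were picked up by the same extglob pattern visible at functions.sh@54. A self-contained sketch of how that glob expands for controller nvme1; paths assume the usual sysfs layout:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # expands to @(ng1|nvme1n)* under $ctrl: ng1n1, nvme1n1, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node ${ns##*/}, index ${ns##*n}"
    done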
00:11:13.206 18:55:44 nvme_scc -- nvme/functions.sh@60-63 -- # register controller: ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:11:13.206 18:55:44 nvme_scc -- nvme/functions.sh@47-52 -- # next controller: /sys/class/nvme/nvme2 (pci 0000:00:12.0, pci_can_use -> 0); nvme_get nvme2 id-ctrl /dev/nvme2
00:11:13.206 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400 mdts=7 ver=0x10400 oaes=0x100 ctratt=0x8000 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 wctemp=343 cctemp=373; cmic cntlid rtd3r rtd3e rrls crdt1-3 nvmsr vwci mec elpe npss avscc apsta mtfa hmpre hmmin tnvmcap unvmcap rpmbs edstt dsto fwug kas hctma mntmt mxtmt sanicap hmminds hmmaxd nsetidmax endgidmax anatt anacap anagrpmax nanagrpid pels domainid megcap all 0; sqes=0x66 ...
-- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:13.209 
18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.209 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:13.210 
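[editor's note] At this point the trace has finished walking the `nvme id-ctrl` output for nvme2: the nvme_get helper reads each `reg: val` line with IFS=: and `read -r reg val`, skips lines with no value at the @22 test, and evals the pair into the nvme2 associative array at @23. Two of the values above decode further per the NVMe spec: sqes=0x66 packs the max/min submission-queue entry size as powers of two (2^6 = 64 bytes), and cqes=0x44 gives 16-byte completion entries. A minimal standalone sketch of the parsing pattern, assuming nvme-cli is on PATH and $dev names a controller node such as /dev/nvme2 (the trace uses /usr/local/src/nvme-cli/nvme); the name trimming here is simplified relative to the real helper, which also normalizes fields such as "ps 0":

    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg%%[[:space:]]*}      # "oacs      " -> "oacs"
        [[ -n $val ]] || continue     # skip value-less lines, as @22 does
        ctrl[$reg]=${val# }           # drop the space after the colon
    done < <(nvme id-ctrl "$dev")
    printf 'oacs=%s sqes=%s cqes=%s\n' "${ctrl[oacs]}" "${ctrl[sqes]}" "${ctrl[cqes]}"

With the loop header at @54 just logged, the same parse is about to repeat per namespace via `nvme id-ns`, which is what the remainder of this trace shows.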
18:55:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.210 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:13.211 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:13.212 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:13.213 18:55:44 nvme_scc -- 
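[editor's note] The lbaf0-lbaf7 strings just captured for ng2n1 describe the namespace's supported LBA formats, and flbas=0x4 marks format 4 (ms:0 lbads:12) as the one in use. Block size is 2^lbads, so these QEMU namespaces run 4096-byte sectors with no metadata. A short self-contained decoding sketch using values copied from the dump above (the variable names are illustrative, not from functions.sh):

    flbas=0x4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    fmt=$(( flbas & 0xf ))        # low nibble selects the in-use format -> 4
    lbads=${lbaf4##*lbads:}       # "12 rp:0 (in use)"
    lbads=${lbads%% *}            # "12"
    echo "LBA format $fmt: $(( 1 << lbads ))-byte blocks"   # -> 4096

The ng2n2 dump that continues below carries the identical format table, as expected for identically configured namespaces on the same controller.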
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 
18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.213 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:13.214 18:55:44 
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:11:13.214 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:11:13.482 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
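[Editor's note] The trace above is the body of nvme_get from test/nvme/functions.sh: it runs nvme-cli's id-ns against the node, splits each output line on the first ':' via IFS, and eval-assigns every "register : value" pair into a global associative array named after the device (ng2n3[nsze]=0x100000 and so on). Below is a minimal sketch of that pattern, simplified from the trace rather than copied from the real helper, so details of its argument handling may differ:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                    # global assoc array named after the device node
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # keep only "register : value" lines
        reg=${reg//[[:space:]]/}           # 'lbaf  4 ' -> 'lbaf4'
        eval "${ref}[$reg]=\"${val# }\""   # e.g. ng2n3[nsze]="0x100000"
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

# Mirrors the call in the trace: nvme_get ng2n3 id-ns /dev/ng2n3
# Afterwards ${ng2n3[nsze]} expands to 0x100000 and ${ng2n3[lbaf4]} carries the "(in use)" format.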
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:13.483 18:55:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:11:13.484 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
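[Editor's note] The loop header repeated at functions.sh@54 uses a bash extglob to enumerate both the character nodes (ng2n*) and the block nodes (nvme2n*) of one controller, and keys _ctrl_ns by the digits after the final 'n'. Because globs sort lexically, ng2n1 is visited before nvme2n1, so the block device later overwrites the character device at the same index, exactly as the trace shows for _ctrl_ns. A sketch of that expansion, assuming ctrl=/sys/class/nvme/nvme2 as in this run (the real loop body also runs the -e test and nvme_get; nullglob is added so the sketch runs cleanly on a machine without NVMe devices):

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
declare -A _ctrl_ns

for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    # "ng${ctrl##*nvme}" -> ng2    : character nodes ng2n1, ng2n2, ng2n3
    # "${ctrl##*/}n"     -> nvme2n : block nodes nvme2n1, nvme2n2, nvme2n3
    _ctrl_ns[${ns##*n}]=${ns##*/}  # key = namespace id, so nvme2n1 replaces
done                               # ng2n1 at index 1, and so on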
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:11:13.485 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:11:13.486 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
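[Editor's note] Every namespace in this run reports flbas=0x4, and lbaf4 is the descriptor tagged "(in use)": ms:0 lbads:12 rp:0, i.e. 4096-byte logical blocks (1 << 12) with no separate metadata. Per the NVMe spec, the low nibble of FLBAS selects the active LBA format when, as here, there are at most 16 formats (nlbaf=7). A hedged sketch of decoding that from the arrays the trace builds, seeded with values copied from the log so it runs standalone:

declare -A nvme2n2=(                       # subset copied from the trace above
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
)
fmt=$(( ${nvme2n2[flbas]} & 0xf ))         # low nibble of FLBAS = format in use -> 4
desc=${nvme2n2[lbaf$fmt]}                  # -> 'ms:0 lbads:12 rp:0 (in use)'
lbads=${desc#*lbads:}; lbads=${lbads%% *}  # extract the lbads field -> 12
echo "active LBA format $fmt: $(( 1 << lbads ))-byte blocks"   # -> 4096-byte blocks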
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:11:13.487 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:11:13.488 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:13.489 18:55:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:13.489 18:55:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:13.489 18:55:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:13.489 18:55:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:13.490 18:55:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:13.490 18:55:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:13.490 18:55:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.490 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 
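Note: the repetitive IFS=: / read / eval records above are the nvme_get helper in test/common/nvme/functions.sh tokenizing 'nvme id-ctrl' output for nvme3 and caching every 'register : value' pair in a global associative array. A minimal sketch of that pattern, assuming nvme-cli's human-readable output format (illustrative only, not the verbatim helper):

    declare -gA nvme3=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue    # the [[ -n ... ]] guards seen in the trace
        reg=${reg//[[:space:]]/}                # "oncs      " -> "oncs"
        eval "nvme3[${reg}]=\"${val# }\""       # e.g. nvme3[oncs]=0x15d, nvme3[mdts]=7
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)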
18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:13.491 18:55:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 
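Note: two registers captured a few records above are easy to misread: the NVMe spec reports wctemp and cctemp in Kelvin, so the 343 and 373 recorded for this QEMU controller are the conventional 70 C warning and 100 C critical composite-temperature thresholds:

    for k in 343 373; do
        printf '%d K = %d C\n' "$k" "$((k - 273))"   # 343 K = 70 C, 373 K = 100 C
    done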
18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.491 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:13.492 
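Note: sqes=0x66 and cqes=0x44 above pack two log2 entry sizes into one byte (bits 3:0 = required size, bits 7:4 = maximum), i.e. 64-byte submission and 16-byte completion queue entries. A quick decode under that standard Identify Controller encoding:

    decode_qes() {
        local v=$(($1))                                      # accept hex input
        echo "required $((1 << (v & 0xf)))B, max $((1 << (v >> 4)))B"
    }
    decode_qes 0x66   # SQES: required 64B, max 64B
    decode_qes 0x44   # CQES: required 16B, max 16B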
18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.492 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.493 18:55:44 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:13.493 18:55:44 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:13.493 18:55:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
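Note: with all four controllers cached, get_ctrls_with_feature filters them through ctrl_has_scc, and the records that follow finish that loop for nvme3 and nvme2. The whole check reduces to one bit test: ONCS bit 8 advertises the NVMe Copy (Simple Copy) command, and every controller here reports oncs=0x15d, so all pass and the first controller in namespace order, nvme1 at 0000:00:10.0, is picked:

    oncs=0x15d
    if (( oncs & 1 << 8 )); then            # bit 8 = Copy command support
        echo "controller supports Simple Copy (SCC)"
    fi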
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:11:13.493 18:55:44 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:11:13.494 18:55:44 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:11:13.494 18:55:44 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:11:13.494 18:55:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:11:13.494 18:55:44 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:11:13.494 18:55:44 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:14.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:14.626 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:14.626 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:14.626 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:14.627 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:11:14.627 18:55:45 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:14.627 18:55:45 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:14.627 18:55:45 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:14.627 18:55:45 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:14.627 ************************************
00:11:14.627 START TEST nvme_simple_copy
00:11:14.627 ************************************
00:11:14.627 18:55:45 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:15.192 Initializing NVMe Controllers
00:11:15.192 Attaching to 0000:00:10.0
00:11:15.192 Controller supports SCC. Attached to 0000:00:10.0
00:11:15.192 Namespace ID: 1 size: 6GB
00:11:15.192 Initialization complete.
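Note: simple_copy has now attached to 0000:00:10.0 through SPDK's PCIe driver; the result lines that follow verify that 64 randomly filled LBAs, copied to LBA 256 with a single Copy command, read back identical. Sketched with generic tools against a hypothetical kernel-visible namespace (the real test drives the controller through SPDK, not the block layer):

    bs=4096                                                        # matches "Namespace Block Size:4096"
    dd if=/dev/nvme0n1 of=/tmp/src.bin bs=$bs count=64 skip=0   status=none
    dd if=/dev/nvme0n1 of=/tmp/dst.bin bs=$bs count=64 skip=256 status=none
    cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"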
00:11:15.192
00:11:15.192 Controller QEMU NVMe Ctrl (12340 )
00:11:15.192 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:11:15.192 Namespace Block Size:4096
00:11:15.192 Writing LBAs 0 to 63 with Random Data
00:11:15.192 Copied LBAs from 0 - 63 to the Destination LBA 256
00:11:15.192 LBAs matching Written Data: 64
00:11:15.192
00:11:15.192 real 0m0.318s
00:11:15.192 user 0m0.140s
00:11:15.192 sys 0m0.075s
00:11:15.192 18:55:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:15.192 18:55:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:11:15.192 ************************************
00:11:15.192 END TEST nvme_simple_copy
00:11:15.192 ************************************
00:11:15.192
00:11:15.192 real 0m8.178s
00:11:15.192 user 0m1.528s
00:11:15.192 sys 0m1.614s
00:11:15.192 18:55:46 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:15.192 18:55:46 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:15.192 ************************************
00:11:15.192 END TEST nvme_scc
00:11:15.192 ************************************
00:11:15.192 18:55:46 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:11:15.192 18:55:46 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:11:15.192 18:55:46 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:11:15.192 18:55:46 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:11:15.192 18:55:46 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:11:15.192 18:55:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:15.192 18:55:46 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:15.192 18:55:46 -- common/autotest_common.sh@10 -- # set +x
00:11:15.192 ************************************
00:11:15.192 START TEST nvme_fdp
00:11:15.192 ************************************
00:11:15.192 18:55:46 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:11:15.192 * Looking for test storage...
00:11:15.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:15.192 18:55:46 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:15.192 18:55:46 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:11:15.192 18:55:46 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:15.192 18:55:46 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.192 18:55:46 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:11:15.192 18:55:46 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.192 18:55:46 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:15.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.192 --rc genhtml_branch_coverage=1 00:11:15.192 --rc genhtml_function_coverage=1 00:11:15.192 --rc genhtml_legend=1 00:11:15.192 --rc geninfo_all_blocks=1 00:11:15.192 --rc geninfo_unexecuted_blocks=1 00:11:15.192 00:11:15.192 ' 00:11:15.192 18:55:46 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:15.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.192 --rc genhtml_branch_coverage=1 00:11:15.192 --rc genhtml_function_coverage=1 00:11:15.192 --rc genhtml_legend=1 00:11:15.192 --rc geninfo_all_blocks=1 00:11:15.192 --rc geninfo_unexecuted_blocks=1 00:11:15.193 00:11:15.193 ' 00:11:15.193 18:55:46 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:15.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.193 --rc genhtml_branch_coverage=1 00:11:15.193 --rc genhtml_function_coverage=1 00:11:15.193 --rc genhtml_legend=1 00:11:15.193 --rc geninfo_all_blocks=1 00:11:15.193 --rc geninfo_unexecuted_blocks=1 00:11:15.193 00:11:15.193 ' 00:11:15.193 18:55:46 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:15.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.193 --rc genhtml_branch_coverage=1 00:11:15.193 --rc genhtml_function_coverage=1 00:11:15.193 --rc genhtml_legend=1 00:11:15.193 --rc geninfo_all_blocks=1 00:11:15.193 --rc geninfo_unexecuted_blocks=1 00:11:15.193 00:11:15.193 ' 00:11:15.193 18:55:46 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:15.193 18:55:46 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.193 18:55:46 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.193 18:55:46 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.193 18:55:46 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.193 18:55:46 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.193 18:55:46 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.193 18:55:46 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.193 18:55:46 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:15.193 18:55:46 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:15.193 18:55:46 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:15.193 18:55:46 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.193 18:55:46 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:15.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:15.759 Waiting for block devices as requested 00:11:15.759 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:15.759 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:16.017 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:16.018 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:21.291 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:21.291 18:55:52 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:21.291 18:55:52 nvme_fdp 
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:11:21.291 18:55:52 nvme_fdp -- scripts/common.sh@18 -- # local i
00:11:21.291 18:55:52 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:11:21.291 18:55:52 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:21.291 18:55:52 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]]
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "'
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 '
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "'
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "'
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:11:21.291 18:55:52 nvme_fdp --
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:21.291 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.291 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:21.292 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.292 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:21.293 18:55:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 
18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:21.293 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:21.294 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:11:21.294 18:55:52 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:11:21.294 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:11:21.294 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
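Each [[ -n ... ]] / eval pair above is one turn of nvme_get's read loop: nvme id-ctrl and id-ns print one register per line as "name : value", IFS=: read -r reg val splits each line at the colon, and eval assigns the pair into the globally scoped associative array declared with local -gA earlier in the trace (nvme0, ng0n1, and so on). A minimal sketch of that parser, assuming the "name : value" output format visible above; the traced helper locates the nvme-cli binary itself and does additional trimming, so treat this as condensed:

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                # global array named nvme0, ng0n1, ...
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue      # skip banner lines with no value
          reg=${reg//[[:space:]]/}       # strip the column padding
          val=${val# }                   # drop the single leading space
          eval "${ref}[$reg]=\"\$val\""  # e.g. nvme0[mdts]="7"
      done < <("$@")
  }

  nvme_get nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
  echo "${nvme0[mdts]}"   # prints 7 for the QEMU controller scanned above

Snapshotting into arrays this way lets later FDP checks test registers with plain [[ ... ]] instead of re-running nvme-cli for every field.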
00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:11:21.295 18:55:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
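With ng0n1 recorded, the namespace loop repeats for the paired block device. The loop header seen above and again just below, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, is an extglob alternation (hence the shopt -s extglob traced earlier): for nvme0 it reduces to the glob @(ng0|nvme0n)*, which matches both the character node ng0n1 and the block node nvme0n1, and _ctrl_ns[${ns##*n}] then files each record under its namespace id inside nvme0_ns. A small demonstration of those expansions, assuming controller nvme0 as in this trace:

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme0
  # ${ctrl##*nvme} -> "0"     (text after the last "nvme")
  # ${ctrl##*/}    -> "nvme0" (basename), so the glob is @(ng0|nvme0n)*
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      ns_dev=${ns##*/}    # ng0n1, then nvme0n1
      nsid=${ns_dev##*n}  # "1" for both nodes
      echo "$ns_dev -> namespace $nsid"
  done

Both nodes describe the same media, which the ng0n1 dump above and the nvme0n1 dump below confirm: nsze/ncap/nuse are 0x140000 blocks, and flbas 0x4 selects lbaf4 (lbads:12, i.e. 2^12 = 4096-byte blocks) as the format in use.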
00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:11:21.295 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:11:21.296 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:11:21.297 18:55:52 nvme_fdp -- scripts/common.sh@18 -- # local i
00:11:21.297 18:55:52 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:11:21.297 18:55:52 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:21.297 18:55:52 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:11:21.297 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:11:21.298 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:11:21.299 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
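
With the controller's id-ctrl fields captured, the walk that resumes below pairs the controller with both of its namespace nodes, the char device (ng1n1) and the block device (nvme1n1), via an extglob pattern, then keys each one by namespace index through a nameref. A self-contained sketch of that loop (again illustrative, not the verbatim script):

    shopt -s extglob                            # required for the @(...) glob below
    declare -A nvme1_ns=()
    declare -n _ctrl_ns=nvme1_ns                # nameref, as functions.sh@53 sets up
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # matches ng1n1 and nvme1n1
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                    # basename: ng1n1 or nvme1n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev         # ${ns##*n} -> the namespace index, here 1
    done
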
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:11:21.300 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:11:21.301 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:11:21.569 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.569 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:21.570 18:55:52 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:21.570 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
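For reference while reading the lbafN entries around this point: each descriptor lists the metadata bytes per block (ms), the log2 of the LBA data size (lbads), and the relative performance hint (rp). A quick decode, using only values taken from this trace:

    # lbads is a power-of-two exponent, so the two data sizes seen here are:
    echo $((1 << 9))     # lbads:9  -> 512-byte blocks
    echo $((1 << 12))    # lbads:12 -> 4096-byte blocks
    # nvme1n1 reports flbas=0x7 above, i.e. LBA format index 7, which is
    # why nvme-cli marks the lbaf7 entry (ms:64 lbads:12) "(in use)".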
00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:21.570 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:21.571 18:55:52 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:21.571 18:55:52 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:21.571 18:55:52 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:21.571 18:55:52 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
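The ver value just captured for nvme2 encodes the NVMe specification version as packed major/minor/tertiary fields; a one-liner decode of the logged value:

    # ver=0x10400: bits 31:16 major, 15:8 minor, 7:0 tertiary.
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    # -> NVMe 1.4.0 for this QEMU controller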
00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:21.571 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:21.572 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
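The wctemp/cctemp values recorded above are in Kelvin, per the Identify Controller definition; converting the QEMU defaults seen here:

    for k in 343 373; do echo "$k K = $((k - 273)) C"; done
    # 343 K = 70 C  (warning composite temperature threshold)
    # 373 K = 100 C (critical composite temperature threshold)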
00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:21.572 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.572 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.573 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
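Just below, the harness walks nvme2's namespace nodes with an extglob pattern; here is a sketch of what that glob expands to, assuming extglob and nullglob are enabled (the @(...) syntax requires extglob, and the trace's successful expansion implies it was set):

    # For ctrl=/sys/class/nvme/nvme2: ${ctrl##*nvme} is "2" and
    # ${ctrl##*/} is "nvme2", so the pattern matches both the generic
    # char-dev namespaces (ng2n*) and the block namespaces (nvme2n*).
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"    # e.g. ng2n1, nvme2n1
    done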
00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 
18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:21.574 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:11:21.575 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:21.576 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 
18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.576 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:11:21.577 
18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.577 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
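
The enumeration driving these per-namespace passes is visible at @53-@58 above, where ng2n1 and ng2n2 were picked up: an extglob over the controller's sysfs directory matches both the ng2n* character devices and the nvme2n* block devices, each hit is run back through nvme_get, and the device name is recorded in a per-controller map keyed by namespace id. A sketch under the same assumptions as the reconstruction above (extglob enabled; declare -n stands in for the local -n at @53, which is only valid inside a function):

    shopt -s extglob                        # needed for the @( | ) pattern
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns            # @53: nameref to the controller's ns map
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54: ng2*|nvme2n*
        [[ -e $ns ]] || continue            # @55: skip if the glob matched nothing
        ns_dev=${ns##*/}                    # @56: ng2n1, ng2n2, ng2n3, nvme2n1, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # @57: fill the array for this ns
        _ctrl_ns[${ns##*n}]=$ns_dev         # @58: later nvme2nX entries overwrite ng2nX
    done
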
00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:11:21.578 18:55:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.578 18:55:52 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.578 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:21.579 18:55:52 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.579 
18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:21.579 18:55:52 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:21.579 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.580 
18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
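The eight lbaf entries just captured enumerate the namespace's supported LBA formats: ms is metadata bytes per block, lbads is the data size as a power of two (lbads:9 is 512-byte, lbads:12 is 4096-byte blocks), and rp is relative performance. The "(in use)" tag on lbaf4 agrees with flbas=0x4, whose low bits select the active format, so with nsze=0x100000 blocks of 2^12 bytes the namespace holds 0x100000 * 4096 B = 4 GiB. A small sketch of that arithmetic; ns_capacity_bytes is hypothetical and assumes the fewer-than-17-formats case where flbas bits 0-3 hold the index:

    # Hypothetical helper: block size and capacity from the captured fields.
    ns_capacity_bytes() {
        local -n ns=$1                     # nameref to e.g. nvme2n1
        local fmt=$((ns[flbas] & 0xf))     # 0x4 -> lbaf4 is in use
        local lbads=${ns[lbaf$fmt]#*lbads:}
        lbads=${lbads%% *}                 # -> "12"
        echo $((ns[nsze] * 2 ** lbads))    # 1048576 * 4096 = 4294967296
    }
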
00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:21.580 18:55:52 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:21.580 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.581 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:21.843 18:55:52 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.843 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 
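The mssrl=128, mcl=128 and msrc=127 values recorded above are the namespace's Copy command limits: Maximum Single Source Range Length and Maximum Copy Length in logical blocks, and Maximum Source Range Count, which is 0-based, so 127 allows up to 128 source ranges per Copy. A hedged sketch of the kind of bounds check a caller might do; copy_fits is hypothetical:

    # Hypothetical pre-check for an NVMe Copy against the limits above.
    copy_fits() {
        local -n ns=$1
        local ranges=$2 per_range=$3 total=$4   # sizes in logical blocks
        (( ranges <= ns[msrc] + 1 )) &&         # msrc is 0-based
        (( per_range <= ns[mssrl] )) &&
        (( total <= ns[mcl] ))
    }
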
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:21.844 18:55:52 
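The `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` loop seen above walks each controller's sysfs directory with an extglob that matches both the generic character nodes (ng2n1, ...) and the block nodes (nvme2n1, ...); `${ns##*n}` then strips everything through the last "n", leaving the namespace index that keys _ctrl_ns. A standalone sketch of the same walk, assuming the usual /sys/class/nvme layout:

    # Sketch of the namespace enumeration (extglob must be enabled, as it
    # is for nvme/functions.sh; the echoed text is illustrative).
    shopt -s extglob
    for ctrl in /sys/class/nvme/nvme*; do
        inst=${ctrl##*nvme}                         # nvme2 -> "2"
        for ns in "$ctrl/"@("ng$inst"|"nvme${inst}n")*; do
            [[ -e $ns ]] || continue                # unmatched glob stays literal
            echo "ns index ${ns##*n}: ${ns##*/}"    # e.g. "3: nvme2n3"
        done
    done
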
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:21.844 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.844 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:21.845 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:21.845 18:55:52 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:21.845 18:55:52 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:21.845 18:55:52 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:21.845 18:55:52 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
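With all three of nvme2's namespaces recorded, the loop moves on to nvme3 (PCI 0000:00:13.0) and runs the same reader over `nvme id-ctrl`: vid 0x1b36 is QEMU's PCI vendor ID, ssvid 0x1af4 is Red Hat's (the emulated device's subsystem vendor), and sn/mn/fr arrive as space-padded strings. mdts=7 bounds every transfer at 2^mdts units of the controller's minimum page size, so assuming the common 4 KiB MPSMIN (the real value lives in the controller's CAP register):

    # Worked example for mdts; the 4 KiB MPSMIN here is an assumption.
    mdts=7
    mpsmin=4096
    echo $(( (1 << mdts) * mpsmin ))    # 524288 bytes = 512 KiB per command
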
00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:21.845 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.846 18:55:52 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 
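Two of the fields just stored decode nicely by hand. ver=0x10400 packs the spec revision as major.minor.tertiary bytes, i.e. NVMe 1.4.0, and oacs=0x12a is the optional-admin-command bitmask; reading the set bits against the base spec (worth double-checking in the revision that matches ver) gives bit 1 Format NVM, bit 3 Namespace Management, bit 5 Directives and bit 8 Doorbell Buffer Config, a plausible set for QEMU's controller.

    # Decoding the captured values; the oacs bit names above are hedged.
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    oacs=0x12a    # 0b1_0010_1010 -> bits 1, 3, 5, 8 set
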
18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.846 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.847 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
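The wall of IFS=: / read / eval records above and below is nvme/functions.sh decoding an identify-controller dump into the nvme3 associative array, one register per iteration: read splits each record into a register name and value, the [[ -n ]] guard skips empty fields, and eval stores the pair. A minimal sketch of the shape of that loop, under the assumption that the input comes from an nvme-cli identify dump (the real script parses pre-captured output and normalizes whitespace differently):

    declare -A nvme3
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip registers with no value
        val=${val//[[:space:]]/}         # assumed normalization step
        eval "nvme3[$reg]=\"$val\""      # e.g. nvme3[oacs]=0x12a
    done < <(nvme id-ctrl /dev/nvme3)    # assumed input source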
00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:21.848 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:21.849 18:55:52 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:21.849 18:55:52 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:21.850 18:55:52 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:21.850 18:55:52 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:21.850 18:55:52 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:21.850 18:55:52 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:22.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:22.995 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.995 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.995 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.995 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.995 18:55:54 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:22.995 18:55:54 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:22.995 18:55:54 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.995 18:55:54 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:22.995 ************************************ 00:11:22.995 START TEST nvme_flexible_data_placement 00:11:22.995 ************************************ 00:11:22.995 18:55:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:23.254 Initializing NVMe Controllers 00:11:23.254 Attaching to 0000:00:13.0 00:11:23.254 Controller supports FDP Attached to 0000:00:13.0 00:11:23.254 Namespace ID: 1 Endurance Group ID: 1 00:11:23.254 Initialization complete. 
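The controller selection traced above reduces to a single bitmask test: get_ctrls_with_feature walks every parsed controller and keeps those whose CTRATT identify field has bit 19 (FDP support) set, which is why nvme0, nvme1, and nvme2 (ctratt=0x8000) fall through and only nvme3 (ctratt=0x88010, bit 19 set) is echoed. A self-contained sketch of that check, seeded with the CTRATT values from this run rather than the script's lookup helpers:

    declare -A ctratts=([nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010)
    for ctrl in "${!ctratts[@]}"; do
        # CTRATT bit 19 = Flexible Data Placement supported
        (( ctratts[$ctrl] & 1 << 19 )) && echo "$ctrl"   # prints only nvme3
    done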
00:11:23.254 00:11:23.254 ================================== 00:11:23.254 == FDP tests for Namespace: #01 == 00:11:23.254 ================================== 00:11:23.254 00:11:23.254 Get Feature: FDP: 00:11:23.254 ================= 00:11:23.254 Enabled: Yes 00:11:23.254 FDP configuration Index: 0 00:11:23.254 00:11:23.254 FDP configurations log page 00:11:23.254 =========================== 00:11:23.254 Number of FDP configurations: 1 00:11:23.254 Version: 0 00:11:23.254 Size: 112 00:11:23.254 FDP Configuration Descriptor: 0 00:11:23.254 Descriptor Size: 96 00:11:23.254 Reclaim Group Identifier format: 2 00:11:23.255 FDP Volatile Write Cache: Not Present 00:11:23.255 FDP Configuration: Valid 00:11:23.255 Vendor Specific Size: 0 00:11:23.255 Number of Reclaim Groups: 2 00:11:23.255 Number of Reclaim Unit Handles: 8 00:11:23.255 Max Placement Identifiers: 128 00:11:23.255 Number of Namespaces Supported: 256 00:11:23.255 Reclaim Unit Nominal Size: 6000000 bytes 00:11:23.255 Estimated Reclaim Unit Time Limit: Not Reported 00:11:23.255 RUH Desc #000: RUH Type: Initially Isolated 00:11:23.255 RUH Desc #001: RUH Type: Initially Isolated 00:11:23.255 RUH Desc #002: RUH Type: Initially Isolated 00:11:23.255 RUH Desc #003: RUH Type: Initially Isolated 00:11:23.255 RUH Desc #004: RUH Type: Initially Isolated 00:11:23.255 RUH Desc #005: RUH Type: Initially Isolated 00:11:23.255 RUH Desc #006: RUH Type: Initially Isolated 00:11:23.255 RUH Desc #007: RUH Type: Initially Isolated 00:11:23.255 00:11:23.255 FDP reclaim unit handle usage log page 00:11:23.255 ====================================== 00:11:23.255 Number of Reclaim Unit Handles: 8 00:11:23.255 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:23.255 RUH Usage Desc #001: RUH Attributes: Unused 00:11:23.255 RUH Usage Desc #002: RUH Attributes: Unused 00:11:23.255 RUH Usage Desc #003: RUH Attributes: Unused 00:11:23.255 RUH Usage Desc #004: RUH Attributes: Unused 00:11:23.255 RUH Usage Desc #005: RUH Attributes: Unused 00:11:23.255 RUH Usage Desc #006: RUH Attributes: Unused 00:11:23.255 RUH Usage Desc #007: RUH Attributes: Unused 00:11:23.255 00:11:23.255 FDP statistics log page 00:11:23.255 ======================= 00:11:23.255 Host bytes with metadata written: 823144448 00:11:23.255 Media bytes with metadata written: 823312384 00:11:23.255 Media bytes erased: 0 00:11:23.255 00:11:23.255 FDP Reclaim unit handle status 00:11:23.255 ============================== 00:11:23.255 Number of RUHS descriptors: 2 00:11:23.255 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004efd 00:11:23.255 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:23.255 00:11:23.255 FDP write on placement id: 0 success 00:11:23.255 00:11:23.255 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:23.255 00:11:23.255 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:23.255 00:11:23.255 Get Feature: FDP Events for Placement handle: #0 00:11:23.255 ======================== 00:11:23.255 Number of FDP Events: 6 00:11:23.255 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:23.255 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:23.255 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:23.255 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:23.255 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:23.255 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:23.255 00:11:23.255 FDP events log page

00:11:23.255 =================== 00:11:23.255 Number of FDP events: 1 00:11:23.255 FDP Event #0: 00:11:23.255 Event Type: RU Not Written to Capacity 00:11:23.255 Placement Identifier: Valid 00:11:23.255 NSID: Valid 00:11:23.255 Location: Valid 00:11:23.255 Placement Identifier: 0 00:11:23.255 Event Timestamp: 7 00:11:23.255 Namespace Identifier: 1 00:11:23.255 Reclaim Group Identifier: 0 00:11:23.255 Reclaim Unit Handle Identifier: 0 00:11:23.255 00:11:23.255 FDP test passed 00:11:23.255 00:11:23.255 real 0m0.289s 00:11:23.255 user 0m0.105s 00:11:23.255 sys 0m0.081s 00:11:23.255 18:55:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.255 18:55:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:23.255 ************************************ 00:11:23.255 END TEST nvme_flexible_data_placement 00:11:23.255 ************************************ 00:11:23.255 00:11:23.255 real 0m8.221s 00:11:23.255 user 0m1.541s 00:11:23.255 sys 0m1.660s 00:11:23.255 18:55:54 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.255 ************************************ 00:11:23.255 18:55:54 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:23.255 END TEST nvme_fdp 00:11:23.255 ************************************ 00:11:23.515 18:55:54 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:23.515 18:55:54 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:23.515 18:55:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.515 18:55:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.515 18:55:54 -- common/autotest_common.sh@10 -- # set +x 00:11:23.515 ************************************ 00:11:23.515 START TEST nvme_rpc 00:11:23.515 ************************************ 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:23.515 * Looking for test storage... 
00:11:23.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.515 18:55:54 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.515 --rc genhtml_branch_coverage=1 00:11:23.515 --rc genhtml_function_coverage=1 00:11:23.515 --rc genhtml_legend=1 00:11:23.515 --rc geninfo_all_blocks=1 00:11:23.515 --rc geninfo_unexecuted_blocks=1 00:11:23.515 00:11:23.515 ' 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.515 --rc genhtml_branch_coverage=1 00:11:23.515 --rc genhtml_function_coverage=1 00:11:23.515 --rc genhtml_legend=1 00:11:23.515 --rc geninfo_all_blocks=1 00:11:23.515 --rc geninfo_unexecuted_blocks=1 00:11:23.515 00:11:23.515 ' 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:23.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.515 --rc genhtml_branch_coverage=1 00:11:23.515 --rc genhtml_function_coverage=1 00:11:23.515 --rc genhtml_legend=1 00:11:23.515 --rc geninfo_all_blocks=1 00:11:23.515 --rc geninfo_unexecuted_blocks=1 00:11:23.515 00:11:23.515 ' 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.515 --rc genhtml_branch_coverage=1 00:11:23.515 --rc genhtml_function_coverage=1 00:11:23.515 --rc genhtml_legend=1 00:11:23.515 --rc geninfo_all_blocks=1 00:11:23.515 --rc geninfo_unexecuted_blocks=1 00:11:23.515 00:11:23.515 ' 00:11:23.515 18:55:54 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:23.515 18:55:54 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:23.515 18:55:54 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:23.775 18:55:54 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:23.775 18:55:54 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67484 00:11:23.775 18:55:54 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:23.775 18:55:54 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:23.775 18:55:54 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67484 00:11:23.775 18:55:54 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67484 ']' 00:11:23.775 18:55:54 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.775 18:55:54 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.775 18:55:54 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.775 18:55:54 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.775 18:55:54 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.775 [2024-11-26 18:55:54.850202] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
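The get_first_nvme_bdf call traced above is a short jq pipeline: gen_nvme.sh emits an attach-config entry for every local NVMe controller, jq extracts each transport address, and the first address (0000:00:10.0 on this host) becomes the target for the RPC test. Roughly, with $rootdir standing in for the repo path seen in the trace:

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && exit 1   # the (( 4 == 0 )) guard from the trace
    echo "${bdfs[0]}"                  # -> 0000:00:10.0 on this host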
00:11:23.775 [2024-11-26 18:55:54.850375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67484 ] 00:11:24.033 [2024-11-26 18:55:55.039075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:24.033 [2024-11-26 18:55:55.189356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.033 [2024-11-26 18:55:55.189368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.020 18:55:55 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.020 18:55:55 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:25.020 18:55:55 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:25.277 Nvme0n1 00:11:25.277 18:55:56 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:25.277 18:55:56 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:25.536 request: 00:11:25.536 { 00:11:25.536 "bdev_name": "Nvme0n1", 00:11:25.536 "filename": "non_existing_file", 00:11:25.536 "method": "bdev_nvme_apply_firmware", 00:11:25.536 "req_id": 1 00:11:25.536 } 00:11:25.536 Got JSON-RPC error response 00:11:25.536 response: 00:11:25.536 { 00:11:25.536 "code": -32603, 00:11:25.536 "message": "open file failed." 00:11:25.536 } 00:11:25.536 18:55:56 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:25.536 18:55:56 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:25.536 18:55:56 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:25.796 18:55:56 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:25.796 18:55:56 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67484 00:11:25.796 18:55:56 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67484 ']' 00:11:25.796 18:55:56 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67484 00:11:25.796 18:55:56 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:25.796 18:55:56 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.796 18:55:56 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67484 00:11:25.796 18:55:56 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.796 18:55:56 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.796 18:55:56 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67484' 00:11:25.796 killing process with pid 67484 00:11:25.796 18:55:56 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67484 00:11:25.796 18:55:56 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67484 00:11:28.329 00:11:28.329 real 0m4.588s 00:11:28.329 user 0m8.882s 00:11:28.329 sys 0m0.619s 00:11:28.329 18:55:59 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.329 ************************************ 00:11:28.329 END TEST nvme_rpc 00:11:28.329 ************************************ 00:11:28.329 18:55:59 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.329 18:55:59 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:28.329 18:55:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:11:28.329 18:55:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.329 18:55:59 -- common/autotest_common.sh@10 -- # set +x 00:11:28.329 ************************************ 00:11:28.329 START TEST nvme_rpc_timeouts 00:11:28.329 ************************************ 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:28.329 * Looking for test storage... 00:11:28.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.329 18:55:59 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:28.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.329 --rc genhtml_branch_coverage=1 00:11:28.329 --rc genhtml_function_coverage=1 00:11:28.329 --rc genhtml_legend=1 00:11:28.329 --rc geninfo_all_blocks=1 00:11:28.329 --rc geninfo_unexecuted_blocks=1 00:11:28.329 00:11:28.329 ' 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:28.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.329 --rc genhtml_branch_coverage=1 00:11:28.329 --rc genhtml_function_coverage=1 00:11:28.329 --rc genhtml_legend=1 00:11:28.329 --rc geninfo_all_blocks=1 00:11:28.329 --rc geninfo_unexecuted_blocks=1 00:11:28.329 00:11:28.329 ' 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:28.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.329 --rc genhtml_branch_coverage=1 00:11:28.329 --rc genhtml_function_coverage=1 00:11:28.329 --rc genhtml_legend=1 00:11:28.329 --rc geninfo_all_blocks=1 00:11:28.329 --rc geninfo_unexecuted_blocks=1 00:11:28.329 00:11:28.329 ' 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:28.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.329 --rc genhtml_branch_coverage=1 00:11:28.329 --rc genhtml_function_coverage=1 00:11:28.329 --rc genhtml_legend=1 00:11:28.329 --rc geninfo_all_blocks=1 00:11:28.329 --rc geninfo_unexecuted_blocks=1 00:11:28.329 00:11:28.329 ' 00:11:28.329 18:55:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:28.329 18:55:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67568 00:11:28.329 18:55:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67568 00:11:28.329 18:55:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67600 00:11:28.329 18:55:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:28.329 18:55:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:28.329 18:55:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67600 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67600 ']' 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.329 18:55:59 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:28.329 [2024-11-26 18:55:59.447935] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:11:28.329 [2024-11-26 18:55:59.448096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67600 ] 00:11:28.588 [2024-11-26 18:55:59.620511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:28.588 [2024-11-26 18:55:59.726667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.588 [2024-11-26 18:55:59.726692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.537 18:56:00 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.537 18:56:00 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:11:29.537 Checking default timeout settings: 00:11:29.537 18:56:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:29.537 18:56:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:29.804 Making settings changes with rpc: 00:11:29.804 18:56:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:29.804 18:56:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:30.063 Check default vs. modified settings: 00:11:30.063 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:30.063 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67568 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67568 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:30.630 Setting action_on_timeout is changed as expected. 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67568 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67568 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:30.630 Setting timeout_us is changed as expected. 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67568 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67568 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:30.630 Setting timeout_admin_us is changed as expected. 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67568 /tmp/settings_modified_67568 00:11:30.630 18:56:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67600 00:11:30.630 18:56:01 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67600 ']' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67600 00:11:30.630 18:56:01 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:11:30.630 18:56:01 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67600 00:11:30.630 18:56:01 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.630 18:56:01 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.630 killing process with pid 67600 00:11:30.630 18:56:01 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67600' 00:11:30.630 18:56:01 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67600 00:11:30.630 18:56:01 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67600 00:11:33.162 RPC TIMEOUT SETTING TEST PASSED. 00:11:33.162 18:56:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
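
The killprocess call traced just above is deliberately defensive: it only signals the target after confirming the pid is still alive and has not been recycled into something it should not touch. A condensed paraphrase of that guard (not the verbatim autotest_common.sh source):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0      # nothing left to kill
        if [ "$(uname)" = Linux ]; then
            # refuse to signal if the pid now belongs to something unexpected
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap and collect exit status
    }
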
00:11:33.162 00:11:33.162 real 0m4.803s 00:11:33.162 user 0m9.588s 00:11:33.162 sys 0m0.640s 00:11:33.162 18:56:03 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.162 18:56:03 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:33.162 ************************************ 00:11:33.162 END TEST nvme_rpc_timeouts 00:11:33.162 ************************************ 00:11:33.162 18:56:03 -- spdk/autotest.sh@239 -- # uname -s 00:11:33.162 18:56:03 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:33.162 18:56:03 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:33.162 18:56:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:33.162 18:56:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.162 18:56:03 -- common/autotest_common.sh@10 -- # set +x 00:11:33.162 ************************************ 00:11:33.162 START TEST sw_hotplug 00:11:33.162 ************************************ 00:11:33.162 18:56:03 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:33.162 * Looking for test storage... 00:11:33.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:33.162 18:56:04 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.162 18:56:04 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.162 18:56:04 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.162 18:56:04 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.162 18:56:04 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.163 18:56:04 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.163 18:56:04 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:33.163 18:56:04 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.163 18:56:04 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.163 --rc genhtml_branch_coverage=1 00:11:33.163 --rc genhtml_function_coverage=1 00:11:33.163 --rc genhtml_legend=1 00:11:33.163 --rc geninfo_all_blocks=1 00:11:33.163 --rc geninfo_unexecuted_blocks=1 00:11:33.163 00:11:33.163 ' 00:11:33.163 18:56:04 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.163 --rc genhtml_branch_coverage=1 00:11:33.163 --rc genhtml_function_coverage=1 00:11:33.163 --rc genhtml_legend=1 00:11:33.163 --rc geninfo_all_blocks=1 00:11:33.163 --rc geninfo_unexecuted_blocks=1 00:11:33.163 00:11:33.163 ' 00:11:33.163 18:56:04 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:33.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.163 --rc genhtml_branch_coverage=1 00:11:33.163 --rc genhtml_function_coverage=1 00:11:33.163 --rc genhtml_legend=1 00:11:33.163 --rc geninfo_all_blocks=1 00:11:33.163 --rc geninfo_unexecuted_blocks=1 00:11:33.163 00:11:33.163 ' 00:11:33.163 18:56:04 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:33.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.163 --rc genhtml_branch_coverage=1 00:11:33.163 --rc genhtml_function_coverage=1 00:11:33.163 --rc genhtml_legend=1 00:11:33.163 --rc geninfo_all_blocks=1 00:11:33.163 --rc geninfo_unexecuted_blocks=1 00:11:33.163 00:11:33.163 ' 00:11:33.163 18:56:04 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:33.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.421 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:33.421 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:33.421 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:33.421 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:33.421 18:56:04 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:33.421 18:56:04 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:33.421 18:56:04 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
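
The lt/cmp_versions exchange traced above decides whether the installed lcov predates 2.x before choosing coverage flags. Paraphrased from the scripts/common.sh logic shown in the trace (the real script routes each field through its decimal helper to tolerate non-numeric components, which this sketch omits):

    # Split both version strings on '.', '-' and ':' and compare field by
    # field, padding the shorter one with zeros.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v a b
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # versions are equal
    }

    version_lt 1.15 2 && echo "lcov predates 2.x, using branch-coverage opts"
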
00:11:33.421 18:56:04 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:33.421 18:56:04 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:33.421 18:56:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:33.679 18:56:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:33.679 18:56:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:33.679 18:56:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:33.679 18:56:04 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:33.679 18:56:04 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:33.679 18:56:04 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:33.679 18:56:04 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:33.679 18:56:04 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:33.937 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.937 Waiting for block devices as requested 00:11:33.937 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.195 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.195 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.453 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.718 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:39.718 18:56:10 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:39.718 18:56:10 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:39.718 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:39.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:39.976 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:40.234 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:40.493 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.493 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.493 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:40.493 18:56:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.493 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:40.493 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:40.493 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68478 00:11:40.493 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:40.493 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:40.493 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:40.493 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:40.493 18:56:11 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:40.493 18:56:11 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:40.493 18:56:11 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:40.493 18:56:11 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:40.751 18:56:11 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:11:40.751 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:40.751 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:40.751 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:40.751 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:40.751 18:56:11 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:41.008 Initializing NVMe Controllers 00:11:41.008 Attaching to 0000:00:10.0 00:11:41.008 Attaching to 0000:00:11.0 00:11:41.008 Attached to 0000:00:10.0 00:11:41.008 Attached to 0000:00:11.0 00:11:41.008 Initialization complete. Starting I/O... 
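
The nvme_in_userspace walk traced before this hotplug run reduces to one lspci invocation plus a sysfs ownership check: PCI class 01, subclass 08, prog-if 02 identifies an NVMe I/O controller. A condensed sketch (the uname/FreeBSD branch from scripts/common.sh is elided, and the keep-if-unclaimed simplification is an assumption consistent with this run, where setup.sh had already moved the controllers to uio_pci_generic):

    nvme_bdfs() {
        local bdf
        # lspci fields: BDF, then the quoted class code; 0108 + prog-if 02 = NVMe
        for bdf in $(lspci -mm -n -D | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
            # keep controllers not claimed by the kernel nvme driver, i.e.
            # ones already handed to a userspace-capable driver (assumption;
            # the traced script also consults uname -s here)
            [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] || echo "$bdf"
        done
    }

    nvmes=($(nvme_bdfs))
    nvmes=("${nvmes[@]::2}")    # sw_hotplug trims to nvme_count=2 devices
    printf '%s\n' "${nvmes[@]}"
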
00:11:41.008 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:41.008 QEMU NVMe Ctrl (12341 ): 44 I/Os completed (+44) 00:11:41.008 00:11:41.942 QEMU NVMe Ctrl (12340 ): 1134 I/Os completed (+1134) 00:11:41.942 QEMU NVMe Ctrl (12341 ): 1326 I/Os completed (+1282) 00:11:41.942 00:11:42.876 QEMU NVMe Ctrl (12340 ): 2760 I/Os completed (+1626) 00:11:42.876 QEMU NVMe Ctrl (12341 ): 3014 I/Os completed (+1688) 00:11:42.876 00:11:43.850 QEMU NVMe Ctrl (12340 ): 4747 I/Os completed (+1987) 00:11:43.850 QEMU NVMe Ctrl (12341 ): 5277 I/Os completed (+2263) 00:11:43.850 00:11:45.227 QEMU NVMe Ctrl (12340 ): 6346 I/Os completed (+1599) 00:11:45.227 QEMU NVMe Ctrl (12341 ): 7116 I/Os completed (+1839) 00:11:45.227 00:11:46.174 QEMU NVMe Ctrl (12340 ): 7940 I/Os completed (+1594) 00:11:46.174 QEMU NVMe Ctrl (12341 ): 8810 I/Os completed (+1694) 00:11:46.174 00:11:46.761 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:46.761 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.761 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.761 [2024-11-26 18:56:17.715561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:46.761 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:46.761 [2024-11-26 18:56:17.718798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.718905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.718955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.719003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:46.761 [2024-11-26 18:56:17.723549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.723635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.723665] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.723691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.761 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.761 [2024-11-26 18:56:17.748405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
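
Each of the removal notices in this stretch corresponds to one pass of remove_attach_helper. The xtrace only records the values being echoed (the "echo 1", "echo uio_pci_generic" lines); the sysfs destinations in the sketch below are the conventional PCI hotplug nodes and are an assumption, not lifted from sw_hotplug.sh:

    # One surprise-removal/attach cycle, paths assumed as noted above.
    hotplug_wait=6
    nvmes=(0000:00:10.0 0000:00:11.0)

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"     # yank the function
    done
    sleep "$hotplug_wait"                               # let the app observe the loss

    echo 1 > /sys/bus/pci/rescan                        # re-enumerate the bus
    for dev in "${nvmes[@]}"; do
        # steer the re-discovered device straight to the userspace driver
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done
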
00:11:46.761 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:46.761 [2024-11-26 18:56:17.750608] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.750694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.750736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.750765] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:46.761 [2024-11-26 18:56:17.753983] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.754047] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.754081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 [2024-11-26 18:56:17.754106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.761 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:46.761 EAL: Scan for (pci) bus failed. 00:11:46.761 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:46.761 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:46.761 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:46.761 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:46.761 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:46.761 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:47.031 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.031 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.031 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.031 18:56:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:47.031 Attaching to 0000:00:10.0 00:11:47.031 Attached to 0000:00:10.0 00:11:47.031 QEMU NVMe Ctrl (12340 ): 52 I/Os completed (+52) 00:11:47.031 00:11:47.031 18:56:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:47.031 18:56:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.031 18:56:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:47.031 Attaching to 0000:00:11.0 00:11:47.031 Attached to 0000:00:11.0 00:11:47.968 QEMU NVMe Ctrl (12340 ): 1663 I/Os completed (+1611) 00:11:47.968 QEMU NVMe Ctrl (12341 ): 1712 I/Os completed (+1712) 00:11:47.968 00:11:48.901 QEMU NVMe Ctrl (12340 ): 3388 I/Os completed (+1725) 00:11:48.901 QEMU NVMe Ctrl (12341 ): 3573 I/Os completed (+1861) 00:11:48.901 00:11:49.903 QEMU NVMe Ctrl (12340 ): 4815 I/Os completed (+1427) 00:11:49.903 QEMU NVMe Ctrl (12341 ): 5230 I/Os completed (+1657) 00:11:49.903 00:11:50.847 QEMU NVMe Ctrl (12340 ): 6319 I/Os completed (+1504) 00:11:50.847 QEMU NVMe Ctrl (12341 ): 6957 I/Os completed (+1727) 00:11:50.847 00:11:52.216 QEMU NVMe Ctrl (12340 ): 7923 I/Os completed (+1604) 00:11:52.216 QEMU NVMe Ctrl (12341 ): 8706 I/Os completed (+1749) 00:11:52.216 00:11:53.147 QEMU NVMe Ctrl (12340 ): 9534 I/Os completed (+1611) 00:11:53.147 QEMU NVMe Ctrl (12341 ): 10470 I/Os completed (+1764) 00:11:53.147 00:11:54.082 QEMU NVMe Ctrl (12340 ): 11141 I/Os completed (+1607) 00:11:54.082 
QEMU NVMe Ctrl (12341 ): 12310 I/Os completed (+1840) 00:11:54.082 00:11:55.016 QEMU NVMe Ctrl (12340 ): 12729 I/Os completed (+1588) 00:11:55.016 QEMU NVMe Ctrl (12341 ): 13985 I/Os completed (+1675) 00:11:55.016 00:11:55.952 QEMU NVMe Ctrl (12340 ): 14361 I/Os completed (+1632) 00:11:55.952 QEMU NVMe Ctrl (12341 ): 15830 I/Os completed (+1845) 00:11:55.952 00:11:56.887 QEMU NVMe Ctrl (12340 ): 15901 I/Os completed (+1540) 00:11:56.887 QEMU NVMe Ctrl (12341 ): 17533 I/Os completed (+1703) 00:11:56.887 00:11:57.822 QEMU NVMe Ctrl (12340 ): 17467 I/Os completed (+1566) 00:11:57.822 QEMU NVMe Ctrl (12341 ): 19247 I/Os completed (+1714) 00:11:57.822 00:11:59.220 QEMU NVMe Ctrl (12340 ): 18783 I/Os completed (+1316) 00:11:59.220 QEMU NVMe Ctrl (12341 ): 20683 I/Os completed (+1436) 00:11:59.220 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.220 [2024-11-26 18:56:30.072991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:59.220 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:59.220 [2024-11-26 18:56:30.076332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.076430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.076476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.076523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:59.220 [2024-11-26 18:56:30.081255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.081364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.081405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.081441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.220 [2024-11-26 18:56:30.102445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:59.220 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:59.220 [2024-11-26 18:56:30.105566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.105675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.105727] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.105767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:59.220 [2024-11-26 18:56:30.110070] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.110156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.110234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 [2024-11-26 18:56:30.110273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:59.220 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:59.220 EAL: Scan for (pci) bus failed. 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:59.220 Attaching to 0000:00:10.0 00:11:59.220 Attached to 0000:00:10.0 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:59.220 18:56:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:59.478 Attaching to 0000:00:11.0 00:11:59.478 Attached to 0000:00:11.0 00:12:00.045 QEMU NVMe Ctrl (12340 ): 1066 I/Os completed (+1066) 00:12:00.045 QEMU NVMe Ctrl (12341 ): 1176 I/Os completed (+1176) 00:12:00.045 00:12:00.980 QEMU NVMe Ctrl (12340 ): 2670 I/Os completed (+1604) 00:12:00.980 QEMU NVMe Ctrl (12341 ): 3051 I/Os completed (+1875) 00:12:00.980 00:12:01.913 QEMU NVMe Ctrl (12340 ): 4211 I/Os completed (+1541) 00:12:01.914 QEMU NVMe Ctrl (12341 ): 4797 I/Os completed (+1746) 00:12:01.914 00:12:02.847 QEMU NVMe Ctrl (12340 ): 5957 I/Os completed (+1746) 00:12:02.847 QEMU NVMe Ctrl (12341 ): 6747 I/Os completed (+1950) 00:12:02.847 00:12:04.219 QEMU NVMe Ctrl (12340 ): 7461 I/Os completed (+1504) 00:12:04.219 QEMU NVMe Ctrl (12341 ): 8509 I/Os completed (+1762) 00:12:04.219 00:12:05.170 QEMU NVMe Ctrl (12340 ): 9151 I/Os completed (+1690) 00:12:05.170 QEMU NVMe Ctrl (12341 ): 10390 I/Os completed (+1881) 00:12:05.170 00:12:06.115 QEMU NVMe Ctrl (12340 ): 10643 I/Os completed (+1492) 00:12:06.115 QEMU NVMe Ctrl (12341 ): 12058 I/Os completed (+1668) 00:12:06.115 
00:12:07.049 QEMU NVMe Ctrl (12340 ): 12113 I/Os completed (+1470) 00:12:07.049 QEMU NVMe Ctrl (12341 ): 13869 I/Os completed (+1811) 00:12:07.049 00:12:07.982 QEMU NVMe Ctrl (12340 ): 13642 I/Os completed (+1529) 00:12:07.982 QEMU NVMe Ctrl (12341 ): 15685 I/Os completed (+1816) 00:12:07.982 00:12:08.916 QEMU NVMe Ctrl (12340 ): 15143 I/Os completed (+1501) 00:12:08.916 QEMU NVMe Ctrl (12341 ): 17375 I/Os completed (+1690) 00:12:08.916 00:12:09.850 QEMU NVMe Ctrl (12340 ): 16791 I/Os completed (+1648) 00:12:09.850 QEMU NVMe Ctrl (12341 ): 19145 I/Os completed (+1770) 00:12:09.850 00:12:11.226 QEMU NVMe Ctrl (12340 ): 18386 I/Os completed (+1595) 00:12:11.226 QEMU NVMe Ctrl (12341 ): 20947 I/Os completed (+1802) 00:12:11.226 00:12:11.226 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:11.226 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:11.226 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.226 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:11.226 [2024-11-26 18:56:42.436410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:11.226 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:11.226 [2024-11-26 18:56:42.439046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.226 [2024-11-26 18:56:42.439127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.226 [2024-11-26 18:56:42.439161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.226 [2024-11-26 18:56:42.439215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:11.485 [2024-11-26 18:56:42.445297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 [2024-11-26 18:56:42.445376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 [2024-11-26 18:56:42.445406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 [2024-11-26 18:56:42.445436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:11.485 [2024-11-26 18:56:42.464878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
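
The elapsed-time line just below ("remove_attach_helper took 43.07s...") comes from timing_cmd, which runs the helper under bash's time keyword with TIMEFORMAT=%2R so that only the real time, to two decimals, is emitted. A simplified sketch (the real helper also dups file descriptors so the timed command's output still reaches the console; here it is diverted to a log file instead):

    time_helper() {
        local TIMEFORMAT=%2R elapsed log=${TIMING_LOG:-/dev/null}
        # `time` prints its %2R report on the group's stderr, the only stream
        # the command substitution captures; the command's own output goes to $log.
        elapsed=$( { time "$@" >>"$log" 2>&1; } 2>&1 )
        echo "$elapsed"
    }

    helper_time=$(time_helper remove_attach_helper 3 6 false)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2
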
00:12:11.485 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:11.485 [2024-11-26 18:56:42.468668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 [2024-11-26 18:56:42.468793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 [2024-11-26 18:56:42.468846] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 [2024-11-26 18:56:42.468888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:11.485 [2024-11-26 18:56:42.474006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 [2024-11-26 18:56:42.474103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 [2024-11-26 18:56:42.474156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 [2024-11-26 18:56:42.474217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:11.485 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:11.485 EAL: Scan for (pci) bus failed. 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.485 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:11.485 Attaching to 0000:00:10.0 00:12:11.485 Attached to 0000:00:10.0 00:12:11.743 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:11.743 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.743 18:56:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:11.743 Attaching to 0000:00:11.0 00:12:11.743 Attached to 0000:00:11.0 00:12:11.743 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:11.743 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:11.743 [2024-11-26 18:56:42.788562] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:23.944 18:56:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:23.944 18:56:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:23.944 18:56:54 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.07 00:12:23.944 18:56:54 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.07 00:12:23.944 18:56:54 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:23.944 18:56:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.07 00:12:23.944 18:56:54 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.07 2 00:12:23.944 remove_attach_helper took 43.07s to complete (handling 2 nvme drive(s)) 18:56:54 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:30.515 18:57:00 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68478 00:12:30.515 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68478) - No such process 00:12:30.515 18:57:00 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68478 00:12:30.515 18:57:00 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:30.515 18:57:00 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:30.515 18:57:00 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:30.515 18:57:00 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69024 00:12:30.515 18:57:00 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:30.515 18:57:00 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:30.515 18:57:00 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69024 00:12:30.515 18:57:00 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69024 ']' 00:12:30.515 18:57:00 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.515 18:57:00 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.515 18:57:00 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.515 18:57:00 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.515 18:57:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:30.515 [2024-11-26 18:57:00.903361] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
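
waitforlisten, traced above for pid 69024, blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket. A paraphrased sketch, not the verbatim autotest_common.sh source (rpc_get_methods is a standard SPDK RPC that doubles as a liveness probe):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died during startup
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &>/dev/null; then
                return 0                                # RPC server is up
            fi
            sleep 0.1
        done
        return 1
    }
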
00:12:30.515 [2024-11-26 18:57:00.903542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69024 ] 00:12:30.515 [2024-11-26 18:57:01.093934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.515 [2024-11-26 18:57:01.232836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.079 18:57:02 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.079 18:57:02 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:12:31.079 18:57:02 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:31.079 18:57:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.079 18:57:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.079 18:57:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.079 18:57:02 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:31.079 18:57:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:31.079 18:57:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:31.079 18:57:02 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:31.079 18:57:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:31.079 18:57:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:31.079 18:57:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:31.079 18:57:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:31.079 18:57:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:31.079 18:57:02 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:31.079 18:57:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:31.079 18:57:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:31.079 18:57:02 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.637 18:57:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.637 18:57:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.637 [2024-11-26 18:57:08.130044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
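
This second, bdev-aware phase (use_bdev=true) drives everything over the target's RPC socket instead of watching the PCI bus directly. Both calls are visible in the trace; the sketch below only spells out the rpc.py invocations that the rpc_cmd wrapper hides:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # let the target itself watch for NVMe devices coming and going
    $rpc bdev_nvme_set_hotplug -e

    # bdev_bdfs: which PCI functions currently back an NVMe bdev?
    bdev_bdfs() {
        $rpc bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }
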
00:12:37.637 [2024-11-26 18:57:08.132922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.637 [2024-11-26 18:57:08.132998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.637 [2024-11-26 18:57:08.133031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.637 [2024-11-26 18:57:08.133064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.637 [2024-11-26 18:57:08.133081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.637 [2024-11-26 18:57:08.133098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.637 [2024-11-26 18:57:08.133114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.637 [2024-11-26 18:57:08.133130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.637 [2024-11-26 18:57:08.133143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.637 [2024-11-26 18:57:08.133164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.637 [2024-11-26 18:57:08.133202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.637 [2024-11-26 18:57:08.133220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.637 18:57:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:37.637 [2024-11-26 18:57:08.530046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
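
Once the controllers are yanked, the test polls the target until no bdev still claims one of the removed functions, which is where the "Still waiting for %s to be gone" lines come from. A sketch of that loop, using the bdev_bdfs helper sketched above (the 0.5s cadence is taken from the trace):

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))    # re-query the target after each settle period
    done
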
00:12:37.637 [2024-11-26 18:57:08.533006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.637 [2024-11-26 18:57:08.533060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.637 [2024-11-26 18:57:08.533085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.637 [2024-11-26 18:57:08.533114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.637 [2024-11-26 18:57:08.533133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.637 [2024-11-26 18:57:08.533148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.637 [2024-11-26 18:57:08.533165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.637 [2024-11-26 18:57:08.533197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.637 [2024-11-26 18:57:08.533215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.637 [2024-11-26 18:57:08.533231] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.637 [2024-11-26 18:57:08.533247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.637 [2024-11-26 18:57:08.533261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:37.637 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:37.638 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:37.638 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.638 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.638 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.638 18:57:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.638 18:57:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.638 18:57:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.638 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:37.638 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:37.638 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:37.638 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:37.638 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:37.896 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:37.896 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:37.896 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:37.896 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:37.896 18:57:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:37.896 18:57:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:37.896 18:57:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:37.896 18:57:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.104 18:57:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.104 18:57:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.104 18:57:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.104 [2024-11-26 18:57:21.130289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:50.104 [2024-11-26 18:57:21.133414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.104 [2024-11-26 18:57:21.133471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.104 [2024-11-26 18:57:21.133494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.104 [2024-11-26 18:57:21.133531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.104 [2024-11-26 18:57:21.133547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.104 [2024-11-26 18:57:21.133563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.104 [2024-11-26 18:57:21.133579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.104 [2024-11-26 18:57:21.133595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.104 [2024-11-26 18:57:21.133609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.104 [2024-11-26 18:57:21.133626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.104 [2024-11-26 18:57:21.133639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.104 [2024-11-26 18:57:21.133655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.104 18:57:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.104 18:57:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.104 18:57:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:50.104 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:50.670 [2024-11-26 18:57:21.630295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:50.670 [2024-11-26 18:57:21.633250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.670 [2024-11-26 18:57:21.633301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.670 [2024-11-26 18:57:21.633329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.670 [2024-11-26 18:57:21.633357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.670 [2024-11-26 18:57:21.633375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.670 [2024-11-26 18:57:21.633390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.671 [2024-11-26 18:57:21.633407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.671 [2024-11-26 18:57:21.633421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.671 [2024-11-26 18:57:21.633436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.671 [2024-11-26 18:57:21.633451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.671 [2024-11-26 18:57:21.633466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.671 [2024-11-26 18:57:21.633480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.671 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:50.671 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.671 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.671 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.671 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.671 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.671 18:57:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.671 18:57:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.671 18:57:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.671 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:50.671 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:50.929 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:50.929 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:50.929 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:50.929 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:50.929 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:50.929 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:50.929 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:50.929 18:57:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:50.929 18:57:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:50.929 18:57:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:50.929 18:57:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.165 18:57:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.165 18:57:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 18:57:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.165 18:57:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.165 18:57:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 18:57:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.165 [2024-11-26 18:57:34.230568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
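
After the rescan and the 12-second settle (sleep 12 at line 66 of the trace), line 71 of sw_hotplug.sh asserts that the bdev-reported BDF list matches exactly the set that was removed; the backslash-escaped right-hand side in the trace is just this comparison with globbing disabled. A sketch, again assuming the bdev_bdfs helper from above:

    nvmes=(0000:00:10.0 0000:00:11.0)
    bdfs=($(bdev_bdfs))
    # quoting the RHS makes [[ == ]] a literal string compare, equivalent to
    # the escaped pattern in the trace
    if [[ ${bdfs[*]} != "${nvmes[*]}" ]]; then
        echo "ERROR: expected '${nvmes[*]}' but found '${bdfs[*]}'" >&2
        exit 1
    fi
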
00:13:03.165 [2024-11-26 18:57:34.233623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.165 [2024-11-26 18:57:34.233686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.165 [2024-11-26 18:57:34.233711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.165 [2024-11-26 18:57:34.233746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.165 [2024-11-26 18:57:34.233763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.165 [2024-11-26 18:57:34.233787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.165 [2024-11-26 18:57:34.233804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.165 [2024-11-26 18:57:34.233824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.165 [2024-11-26 18:57:34.233839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.165 [2024-11-26 18:57:34.233859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.165 [2024-11-26 18:57:34.233874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.165 [2024-11-26 18:57:34.233893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:03.165 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:03.424 [2024-11-26 18:57:34.630636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:03.424 [2024-11-26 18:57:34.633928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.424 [2024-11-26 18:57:34.633979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.424 [2024-11-26 18:57:34.634010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.424 [2024-11-26 18:57:34.634042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.424 [2024-11-26 18:57:34.634064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.424 [2024-11-26 18:57:34.634079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.424 [2024-11-26 18:57:34.634097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.424 [2024-11-26 18:57:34.634111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.424 [2024-11-26 18:57:34.634130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.424 [2024-11-26 18:57:34.634144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.424 [2024-11-26 18:57:34.634160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.424 [2024-11-26 18:57:34.634191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.683 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:03.683 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.683 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.683 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.683 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.683 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.683 18:57:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.683 18:57:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.683 18:57:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.683 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:03.683 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:03.943 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:03.943 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:03.943 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:03.943 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:03.943 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:03.943 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:03.943 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:03.943 18:57:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:03.943 18:57:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:03.943 18:57:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:03.943 18:57:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.09 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.09 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.09 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.09 2 00:13:16.172 remove_attach_helper took 45.09s to complete (handling 2 nvme drive(s)) 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:16.172 18:57:47 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:16.172 18:57:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:16.172 18:57:47 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:22.747 18:57:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.747 18:57:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.747 18:57:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.747 [2024-11-26 18:57:53.254401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:22.747 [2024-11-26 18:57:53.256456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.747 [2024-11-26 18:57:53.256539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.747 [2024-11-26 18:57:53.256569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.747 [2024-11-26 18:57:53.256601] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.747 [2024-11-26 18:57:53.256618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.747 [2024-11-26 18:57:53.256634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.747 [2024-11-26 18:57:53.256650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.747 [2024-11-26 18:57:53.256666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.747 [2024-11-26 18:57:53.256680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.747 [2024-11-26 18:57:53.256698] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.747 [2024-11-26 18:57:53.256712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.747 [2024-11-26 18:57:53.256731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:22.747 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:22.747 [2024-11-26 18:57:53.654403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:22.747 [2024-11-26 18:57:53.657161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.747 [2024-11-26 18:57:53.657230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.747 [2024-11-26 18:57:53.657257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.748 [2024-11-26 18:57:53.657286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.748 [2024-11-26 18:57:53.657307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.748 [2024-11-26 18:57:53.657322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.748 [2024-11-26 18:57:53.657340] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.748 [2024-11-26 18:57:53.657354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.748 [2024-11-26 18:57:53.657370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.748 [2024-11-26 18:57:53.657384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.748 [2024-11-26 18:57:53.657401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.748 [2024-11-26 18:57:53.657415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.748 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:22.748 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:22.748 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:22.748 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:22.748 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:22.748 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:22.748 18:57:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.748 18:57:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.748 18:57:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.748 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:22.748 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:22.748 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:22.748 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:22.748 18:57:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:23.006 18:57:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:23.006 18:57:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:23.006 18:57:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:23.006 18:57:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:23.006 18:57:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:23.006 18:57:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:23.006 18:57:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:23.006 18:57:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.201 18:58:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.201 18:58:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.201 18:58:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.201 18:58:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.201 18:58:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.201 [2024-11-26 18:58:06.254630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:35.201 18:58:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.201 [2024-11-26 18:58:06.257702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.201 [2024-11-26 18:58:06.257770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.201 [2024-11-26 18:58:06.257793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.201 [2024-11-26 18:58:06.257824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.201 [2024-11-26 18:58:06.257841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.201 [2024-11-26 18:58:06.257858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.201 [2024-11-26 18:58:06.257874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.201 [2024-11-26 18:58:06.257890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.201 [2024-11-26 18:58:06.257904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.201 [2024-11-26 18:58:06.257922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.201 [2024-11-26 18:58:06.257936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.201 [2024-11-26 18:58:06.257952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:35.201 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:35.459 [2024-11-26 18:58:06.654619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:35.459 [2024-11-26 18:58:06.657588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.459 [2024-11-26 18:58:06.657644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.459 [2024-11-26 18:58:06.657671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.459 [2024-11-26 18:58:06.657699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.459 [2024-11-26 18:58:06.657720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.459 [2024-11-26 18:58:06.657735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.460 [2024-11-26 18:58:06.657753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.460 [2024-11-26 18:58:06.657766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.460 [2024-11-26 18:58:06.657782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.460 [2024-11-26 18:58:06.657798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.460 [2024-11-26 18:58:06.657814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.460 [2024-11-26 18:58:06.657828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.717 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:35.717 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:35.717 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:35.717 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.717 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.717 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.717 18:58:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.717 18:58:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.717 18:58:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.717 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:35.717 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:35.717 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:35.717 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:35.717 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:35.975 18:58:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:35.975 18:58:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:35.975 18:58:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:35.975 18:58:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:35.975 18:58:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:35.975 18:58:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:35.975 18:58:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:35.975 18:58:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.169 18:58:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.169 18:58:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.169 18:58:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.169 18:58:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.169 18:58:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.169 [2024-11-26 18:58:19.254877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:48.169 [2024-11-26 18:58:19.257834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.169 [2024-11-26 18:58:19.257893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.169 [2024-11-26 18:58:19.257917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.169 [2024-11-26 18:58:19.257948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.169 [2024-11-26 18:58:19.257964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.169 [2024-11-26 18:58:19.257981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.169 [2024-11-26 18:58:19.257997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.169 [2024-11-26 18:58:19.258016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.169 [2024-11-26 18:58:19.258030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.169 [2024-11-26 18:58:19.258047] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.169 [2024-11-26 18:58:19.258060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.169 [2024-11-26 18:58:19.258076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.169 18:58:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:48.169 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:48.804 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:48.804 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:48.804 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:48.804 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.804 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.804 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.804 18:58:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.804 18:58:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.804 18:58:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.804 [2024-11-26 18:58:19.854871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:48.804 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:48.804 18:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:48.804 [2024-11-26 18:58:19.856942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.804 [2024-11-26 18:58:19.856994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.804 [2024-11-26 18:58:19.857020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.804 [2024-11-26 18:58:19.857049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.804 [2024-11-26 18:58:19.857073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.804 [2024-11-26 18:58:19.857088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.804 [2024-11-26 18:58:19.857105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.804 [2024-11-26 18:58:19.857120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.804 [2024-11-26 18:58:19.857136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.804 [2024-11-26 18:58:19.857150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.804 [2024-11-26 18:58:19.857187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.804 [2024-11-26 18:58:19.857205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:49.397 18:58:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.397 18:58:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:49.397 18:58:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:49.397 18:58:20 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:49.397 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:49.655 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:49.655 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:49.655 18:58:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:01.858 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:01.858 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:01.858 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:01.858 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:01.858 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:01.858 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.858 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:01.858 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.59 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.59 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:01.858 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.59 00:14:01.858 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.59 2 00:14:01.858 remove_attach_helper took 45.59s to complete (handling 2 nvme drive(s)) 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:01.858 18:58:32 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69024 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69024 ']' 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69024 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69024 00:14:01.858 killing process with pid 69024 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69024' 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69024 00:14:01.858 18:58:32 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69024 00:14:03.758 18:58:34 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:04.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:04.633 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:04.633 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:04.633 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.633 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 
00:14:04.892 00:14:04.892 real 2m31.887s 00:14:04.892 user 1m51.706s 00:14:04.892 sys 0m20.083s 00:14:04.892 18:58:35 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.892 ************************************ 00:14:04.892 END TEST sw_hotplug 00:14:04.892 ************************************ 00:14:04.892 18:58:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:04.892 18:58:35 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:04.892 18:58:35 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:04.892 18:58:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:04.892 18:58:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.892 18:58:35 -- common/autotest_common.sh@10 -- # set +x 00:14:04.892 ************************************ 00:14:04.892 START TEST nvme_xnvme 00:14:04.892 ************************************ 00:14:04.892 18:58:35 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:04.892 * Looking for test storage... 00:14:04.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:04.892 18:58:35 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:04.892 18:58:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:04.892 18:58:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:04.892 18:58:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:04.892 18:58:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:04.893 18:58:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.893 18:58:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:04.893 18:58:36 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:04.893 18:58:36 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.893 18:58:36 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:04.893 18:58:36 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:04.893 18:58:36 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.893 18:58:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:04.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.893 --rc genhtml_branch_coverage=1 00:14:04.893 --rc genhtml_function_coverage=1 00:14:04.893 --rc genhtml_legend=1 00:14:04.893 --rc geninfo_all_blocks=1 00:14:04.893 --rc geninfo_unexecuted_blocks=1 00:14:04.893 00:14:04.893 ' 00:14:04.893 18:58:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:04.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.893 --rc genhtml_branch_coverage=1 00:14:04.893 --rc genhtml_function_coverage=1 00:14:04.893 --rc genhtml_legend=1 00:14:04.893 --rc geninfo_all_blocks=1 00:14:04.893 --rc geninfo_unexecuted_blocks=1 00:14:04.893 00:14:04.893 ' 00:14:04.893 18:58:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:04.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.893 --rc genhtml_branch_coverage=1 00:14:04.893 --rc genhtml_function_coverage=1 00:14:04.893 --rc genhtml_legend=1 00:14:04.893 --rc geninfo_all_blocks=1 00:14:04.893 --rc geninfo_unexecuted_blocks=1 00:14:04.893 00:14:04.893 ' 00:14:04.893 18:58:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:04.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.893 --rc genhtml_branch_coverage=1 00:14:04.893 --rc genhtml_function_coverage=1 00:14:04.893 --rc genhtml_legend=1 00:14:04.893 --rc geninfo_all_blocks=1 00:14:04.893 --rc geninfo_unexecuted_blocks=1 00:14:04.893 00:14:04.893 ' 00:14:05.154 18:58:36 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:14:05.154 18:58:36 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:14:05.154 18:58:36 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:05.154 18:58:36 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:14:05.154 18:58:36 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:05.154 18:58:36 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:05.154 18:58:36 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:05.154 18:58:36 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:14:05.154 18:58:36 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:05.154 18:58:36 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:05.154 18:58:36 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:05.155 18:58:36 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:05.155 18:58:36 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:05.155 18:58:36 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:14:05.155 18:58:36 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:05.155 #define SPDK_CONFIG_H 00:14:05.155 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:05.155 #define SPDK_CONFIG_APPS 1 00:14:05.155 #define SPDK_CONFIG_ARCH native 00:14:05.155 #define SPDK_CONFIG_ASAN 1 00:14:05.155 #undef SPDK_CONFIG_AVAHI 00:14:05.155 #undef SPDK_CONFIG_CET 00:14:05.155 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:05.155 #define SPDK_CONFIG_COVERAGE 1 00:14:05.155 #define SPDK_CONFIG_CROSS_PREFIX 00:14:05.155 #undef SPDK_CONFIG_CRYPTO 00:14:05.155 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:05.155 #undef SPDK_CONFIG_CUSTOMOCF 00:14:05.155 #undef SPDK_CONFIG_DAOS 00:14:05.155 #define SPDK_CONFIG_DAOS_DIR 00:14:05.155 #define SPDK_CONFIG_DEBUG 1 00:14:05.155 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:05.155 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:05.155 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:05.155 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:05.155 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:05.155 #undef SPDK_CONFIG_DPDK_UADK 00:14:05.155 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:05.155 #define SPDK_CONFIG_EXAMPLES 1 00:14:05.155 #undef SPDK_CONFIG_FC 00:14:05.155 #define SPDK_CONFIG_FC_PATH 00:14:05.155 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:05.155 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:05.155 #define SPDK_CONFIG_FSDEV 1 00:14:05.155 #undef SPDK_CONFIG_FUSE 00:14:05.155 #undef SPDK_CONFIG_FUZZER 00:14:05.155 #define SPDK_CONFIG_FUZZER_LIB 00:14:05.155 #undef SPDK_CONFIG_GOLANG 00:14:05.155 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:05.155 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:05.155 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:05.155 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:05.155 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:05.155 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:05.155 #undef SPDK_CONFIG_HAVE_LZ4 00:14:05.155 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:05.155 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:05.155 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:05.155 #define SPDK_CONFIG_IDXD 1 00:14:05.155 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:05.155 #undef SPDK_CONFIG_IPSEC_MB 00:14:05.155 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:05.155 #define SPDK_CONFIG_ISAL 1 00:14:05.155 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:05.155 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:05.155 #define SPDK_CONFIG_LIBDIR 00:14:05.155 #undef SPDK_CONFIG_LTO 00:14:05.155 #define SPDK_CONFIG_MAX_LCORES 128 00:14:05.155 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:05.155 #define SPDK_CONFIG_NVME_CUSE 1 00:14:05.155 #undef SPDK_CONFIG_OCF 00:14:05.155 #define SPDK_CONFIG_OCF_PATH 00:14:05.155 #define SPDK_CONFIG_OPENSSL_PATH 00:14:05.155 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:05.155 #define SPDK_CONFIG_PGO_DIR 00:14:05.155 #undef SPDK_CONFIG_PGO_USE 00:14:05.155 #define SPDK_CONFIG_PREFIX /usr/local 00:14:05.155 #undef SPDK_CONFIG_RAID5F 00:14:05.156 #undef SPDK_CONFIG_RBD 00:14:05.156 #define SPDK_CONFIG_RDMA 1 00:14:05.156 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:05.156 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:05.156 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:05.156 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:05.156 #define SPDK_CONFIG_SHARED 1 00:14:05.156 #undef SPDK_CONFIG_SMA 00:14:05.156 #define SPDK_CONFIG_TESTS 1 00:14:05.156 #undef SPDK_CONFIG_TSAN 00:14:05.156 #define SPDK_CONFIG_UBLK 1 00:14:05.156 #define SPDK_CONFIG_UBSAN 1 00:14:05.156 #undef SPDK_CONFIG_UNIT_TESTS 00:14:05.156 #undef SPDK_CONFIG_URING 00:14:05.156 #define SPDK_CONFIG_URING_PATH 00:14:05.156 #undef SPDK_CONFIG_URING_ZNS 00:14:05.156 #undef SPDK_CONFIG_USDT 00:14:05.156 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:05.156 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:05.156 #undef SPDK_CONFIG_VFIO_USER 00:14:05.156 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:05.156 #define SPDK_CONFIG_VHOST 1 00:14:05.156 #define SPDK_CONFIG_VIRTIO 1 00:14:05.156 #undef SPDK_CONFIG_VTUNE 00:14:05.156 #define SPDK_CONFIG_VTUNE_DIR 00:14:05.156 #define SPDK_CONFIG_WERROR 1 00:14:05.156 #define SPDK_CONFIG_WPDK_DIR 00:14:05.156 #define SPDK_CONFIG_XNVME 1 00:14:05.156 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:05.156 18:58:36 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:05.156 18:58:36 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.156 18:58:36 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:05.156 18:58:36 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.156 18:58:36 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.156 18:58:36 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.156 18:58:36 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.156 18:58:36 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.156 18:58:36 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.156 18:58:36 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:05.156 18:58:36 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@68 -- # uname -s 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:05.156 
18:58:36 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:14:05.156 18:58:36 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:05.156 18:58:36 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:05.157 18:58:36 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:05.157 18:58:36 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
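The exports traced above pin down sanitizer behaviour for every binary the run launches: ASan aborts on the first error, UBSan halts with exit code 134, and LeakSanitizer reads a suppression file that is rebuilt from scratch with a single known-leak entry. A minimal standalone sketch of that wiring, using only values visible in the trace:

# Rebuild the LeakSanitizer suppression file with the one known leak entry
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"
# Export the sanitizer knobs exactly as the harness does above
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=$asan_suppression_file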
00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70367 ]] 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70367 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:14:05.157 18:58:36 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.R8HZyp 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.R8HZyp/tests/xnvme /tmp/spdk.R8HZyp 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:14:05.158 18:58:36 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974933504 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593214976 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974933504 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593214976 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.158 18:58:36 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95274602496 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4428177408 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:14:05.158 * Looking for test storage... 
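The mount rows above are set_test_storage snapshotting `df -T` into associative arrays keyed by mount point; block counts are scaled to the byte values printed in the trace, and the candidate directories chosen earlier (the testdir, then the mktemp fallback under /tmp) are checked against the requested 2 GiB. A sketch of the parsing step, assuming only coreutils df and bash 4+:

requested_size=2147483648   # 2 GiB, per the trace above
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$((size * 1024))    # df -T reports 1K blocks; keep bytes
  uses["$mount"]=$((use * 1024))
  avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)
# Each candidate directory's mount point is then resolved (df + awk on the
# sixth column, as traced below) and its avails[] entry compared against
# requested_size before the first fitting directory is exported.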
00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974933504 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:05.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:05.158 18:58:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:05.158 18:58:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:05.159 18:58:36 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.159 18:58:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:05.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.159 --rc genhtml_branch_coverage=1 00:14:05.159 --rc genhtml_function_coverage=1 00:14:05.159 --rc genhtml_legend=1 00:14:05.159 --rc geninfo_all_blocks=1 00:14:05.159 --rc geninfo_unexecuted_blocks=1 00:14:05.159 00:14:05.159 ' 00:14:05.159 18:58:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:05.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.159 --rc genhtml_branch_coverage=1 00:14:05.159 --rc genhtml_function_coverage=1 00:14:05.159 --rc genhtml_legend=1 00:14:05.159 --rc geninfo_all_blocks=1 
00:14:05.159 --rc geninfo_unexecuted_blocks=1 00:14:05.159 00:14:05.159 ' 00:14:05.159 18:58:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:05.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.159 --rc genhtml_branch_coverage=1 00:14:05.159 --rc genhtml_function_coverage=1 00:14:05.159 --rc genhtml_legend=1 00:14:05.159 --rc geninfo_all_blocks=1 00:14:05.159 --rc geninfo_unexecuted_blocks=1 00:14:05.159 00:14:05.159 ' 00:14:05.159 18:58:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:05.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.159 --rc genhtml_branch_coverage=1 00:14:05.159 --rc genhtml_function_coverage=1 00:14:05.159 --rc genhtml_legend=1 00:14:05.159 --rc geninfo_all_blocks=1 00:14:05.159 --rc geninfo_unexecuted_blocks=1 00:14:05.159 00:14:05.159 ' 00:14:05.159 18:58:36 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.159 18:58:36 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.159 18:58:36 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.159 18:58:36 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.159 18:58:36 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.159 18:58:36 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:05.159 18:58:36 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.159 18:58:36 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:14:05.159 18:58:36 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:05.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:05.726 Waiting for block devices as requested 00:14:05.726 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:05.985 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:05.985 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:05.985 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:11.296 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:11.296 18:58:42 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:14:11.553 18:58:42 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:14:11.553 18:58:42 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:14:11.812 18:58:42 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:14:11.812 18:58:42 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:11.812 No valid GPT data, bailing 00:14:11.812 18:58:42 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:11.812 18:58:42 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:14:11.812 18:58:42 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:11.812 18:58:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:11.812 18:58:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:11.812 18:58:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.812 18:58:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.812 ************************************ 00:14:11.812 START TEST xnvme_rpc 00:14:11.812 ************************************ 00:14:11.812 18:58:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:11.812 18:58:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:11.812 18:58:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:11.812 18:58:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:11.812 18:58:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:11.812 18:58:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70757 00:14:11.812 18:58:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:11.812 18:58:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70757 00:14:11.812 18:58:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70757 ']' 00:14:11.812 18:58:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.812 18:58:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.813 18:58:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.813 18:58:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.813 18:58:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.813 [2024-11-26 18:58:43.022312] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
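xnvme_rpc, which just launched spdk_tgt as pid 70757, drives the bdev_xnvme RPC surface end to end: create a bdev over the raw namespace, read the saved config back parameter by parameter, then delete it and shut the target down. A condensed sketch of that round-trip; the paths and positional argument order are taken from the traced rpc_cmd calls, and the sleep is a crude stand-in for waitforlisten's socket polling:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK"/build/bin/spdk_tgt &          # listens on /var/tmp/spdk.sock
tgt=$!
sleep 1                               # stand-in for waitforlisten
"$SPDK"/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
# Verify a registered parameter the same way the test does below
"$SPDK"/scripts/rpc.py framework_get_config bdev |
  jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: libaio
"$SPDK"/scripts/rpc.py bdev_xnvme_delete xnvme_bdev
kill "$tgt" && wait "$tgt" || true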
00:14:11.813 [2024-11-26 18:58:43.022814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70757 ] 00:14:12.071 [2024-11-26 18:58:43.219236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.330 [2024-11-26 18:58:43.322675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.266 xnvme_bdev 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:13.266 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70757 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70757 ']' 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70757 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70757 00:14:13.267 killing process with pid 70757 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70757' 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70757 00:14:13.267 18:58:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70757 00:14:15.798 ************************************ 00:14:15.798 END TEST xnvme_rpc 00:14:15.798 ************************************ 00:14:15.798 00:14:15.798 real 0m3.615s 00:14:15.798 user 0m3.853s 00:14:15.798 sys 0m0.465s 00:14:15.798 18:58:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.798 18:58:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.798 18:58:46 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:15.798 18:58:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:15.798 18:58:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.798 18:58:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:15.798 ************************************ 00:14:15.798 START TEST xnvme_bdevperf 00:14:15.798 ************************************ 00:14:15.798 18:58:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:15.798 18:58:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:15.798 18:58:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:15.798 18:58:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:15.798 18:58:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:15.798 18:58:46 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:15.798 18:58:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:15.798 18:58:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:15.798 { 00:14:15.798 "subsystems": [ 00:14:15.798 { 00:14:15.798 "subsystem": "bdev", 00:14:15.798 "config": [ 00:14:15.798 { 00:14:15.798 "params": { 00:14:15.798 "io_mechanism": "libaio", 00:14:15.798 "conserve_cpu": false, 00:14:15.798 "filename": "/dev/nvme0n1", 00:14:15.798 "name": "xnvme_bdev" 00:14:15.798 }, 00:14:15.798 "method": "bdev_xnvme_create" 00:14:15.798 }, 00:14:15.798 { 00:14:15.798 "method": "bdev_wait_for_examine" 00:14:15.798 } 00:14:15.798 ] 00:14:15.798 } 00:14:15.798 ] 00:14:15.798 } 00:14:15.798 [2024-11-26 18:58:46.662474] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:14:15.798 [2024-11-26 18:58:46.662648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70837 ] 00:14:15.798 [2024-11-26 18:58:46.849617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.798 [2024-11-26 18:58:47.007574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.374 Running I/O for 5 seconds... 00:14:18.244 24601.00 IOPS, 96.10 MiB/s [2024-11-26T18:58:50.834Z] 24921.00 IOPS, 97.35 MiB/s [2024-11-26T18:58:51.769Z] 25226.00 IOPS, 98.54 MiB/s [2024-11-26T18:58:52.705Z] 25426.50 IOPS, 99.32 MiB/s 00:14:21.490 Latency(us) 00:14:21.490 [2024-11-26T18:58:52.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.490 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:21.490 xnvme_bdev : 5.00 25448.46 99.41 0.00 0.00 2508.73 232.73 8043.05 00:14:21.490 [2024-11-26T18:58:52.705Z] =================================================================================================================== 00:14:21.490 [2024-11-26T18:58:52.705Z] Total : 25448.46 99.41 0.00 0.00 2508.73 232.73 8043.05 00:14:22.459 18:58:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:22.460 18:58:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:22.460 18:58:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:22.460 18:58:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:22.460 18:58:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:22.460 { 00:14:22.460 "subsystems": [ 00:14:22.460 { 00:14:22.460 "subsystem": "bdev", 00:14:22.460 "config": [ 00:14:22.460 { 00:14:22.460 "params": { 00:14:22.460 "io_mechanism": "libaio", 00:14:22.460 "conserve_cpu": false, 00:14:22.460 "filename": "/dev/nvme0n1", 00:14:22.460 "name": "xnvme_bdev" 00:14:22.460 }, 00:14:22.460 "method": "bdev_xnvme_create" 00:14:22.460 }, 00:14:22.460 { 00:14:22.460 "method": "bdev_wait_for_examine" 00:14:22.460 } 00:14:22.460 ] 00:14:22.460 } 00:14:22.460 ] 00:14:22.460 } 00:14:22.460 [2024-11-26 18:58:53.522736] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
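Each bdevperf pass in this test is configured through JSON streamed over a substituted file descriptor (--json /dev/fd/62), so no config file touches disk; the JSON body is the gen_conf output shown above. A standalone equivalent of the randwrite invocation, with process substitution standing in for the harness's fd 62:

SPDK=/home/vagrant/spdk_repo/spdk
conf='{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "libaio", "conserve_cpu": false,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}'
# Same queue depth, workload, runtime, target bdev and IO size as the trace
"$SPDK"/build/examples/bdevperf --json <(printf '%s' "$conf") \
  -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096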
00:14:22.460 [2024-11-26 18:58:53.522943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70912 ] 00:14:22.718 [2024-11-26 18:58:53.712156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.718 [2024-11-26 18:58:53.854288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.285 Running I/O for 5 seconds... 00:14:25.149 24862.00 IOPS, 97.12 MiB/s [2024-11-26T18:58:57.294Z] 26378.00 IOPS, 103.04 MiB/s [2024-11-26T18:58:58.226Z] 26736.67 IOPS, 104.44 MiB/s [2024-11-26T18:58:59.598Z] 26628.00 IOPS, 104.02 MiB/s [2024-11-26T18:58:59.598Z] 26434.60 IOPS, 103.26 MiB/s 00:14:28.383 Latency(us) 00:14:28.383 [2024-11-26T18:58:59.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.383 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:28.383 xnvme_bdev : 5.01 26406.71 103.15 0.00 0.00 2417.11 558.55 6940.86 00:14:28.383 [2024-11-26T18:58:59.598Z] =================================================================================================================== 00:14:28.383 [2024-11-26T18:58:59.598Z] Total : 26406.71 103.15 0.00 0.00 2417.11 558.55 6940.86 00:14:29.318 00:14:29.318 real 0m13.736s 00:14:29.318 user 0m5.314s 00:14:29.318 sys 0m5.958s 00:14:29.318 18:59:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.318 18:59:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:29.318 ************************************ 00:14:29.318 END TEST xnvme_bdevperf 00:14:29.318 ************************************ 00:14:29.318 18:59:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:29.318 18:59:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:29.318 18:59:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.318 18:59:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.318 ************************************ 00:14:29.318 START TEST xnvme_fio_plugin 00:14:29.318 ************************************ 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:29.318 18:59:00 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:29.318 18:59:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:29.318 { 00:14:29.318 "subsystems": [ 00:14:29.318 { 00:14:29.318 "subsystem": "bdev", 00:14:29.318 "config": [ 00:14:29.318 { 00:14:29.318 "params": { 00:14:29.318 "io_mechanism": "libaio", 00:14:29.318 "conserve_cpu": false, 00:14:29.318 "filename": "/dev/nvme0n1", 00:14:29.318 "name": "xnvme_bdev" 00:14:29.318 }, 00:14:29.318 "method": "bdev_xnvme_create" 00:14:29.318 }, 00:14:29.318 { 00:14:29.318 "method": "bdev_wait_for_examine" 00:14:29.318 } 00:14:29.318 ] 00:14:29.318 } 00:14:29.318 ] 00:14:29.318 } 00:14:29.577 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:29.577 fio-3.35 00:14:29.577 Starting 1 thread 00:14:36.135 00:14:36.135 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71038: Tue Nov 26 18:59:06 2024 00:14:36.135 read: IOPS=26.1k, BW=102MiB/s (107MB/s)(511MiB/5001msec) 00:14:36.135 slat (usec): min=5, max=966, avg=33.99, stdev=28.99 00:14:36.135 clat (usec): min=115, max=5073, avg=1350.77, stdev=774.93 00:14:36.135 lat (usec): min=152, max=5178, avg=1384.76, stdev=778.49 00:14:36.135 clat percentiles (usec): 00:14:36.135 | 1.00th=[ 225], 5.00th=[ 338], 10.00th=[ 445], 20.00th=[ 652], 00:14:36.135 | 30.00th=[ 840], 40.00th=[ 1020], 50.00th=[ 1221], 60.00th=[ 1434], 00:14:36.135 | 70.00th=[ 1680], 80.00th=[ 1991], 90.00th=[ 2442], 95.00th=[ 2835], 00:14:36.135 | 99.00th=[ 3523], 99.50th=[ 3785], 99.90th=[ 4293], 99.95th=[ 4490], 00:14:36.135 | 99.99th=[ 4752] 00:14:36.135 bw ( 
KiB/s): min=88952, max=115160, per=99.29%, avg=103860.56, stdev=9089.50, samples=9 00:14:36.135 iops : min=22238, max=28790, avg=25965.11, stdev=2272.35, samples=9 00:14:36.135 lat (usec) : 250=1.70%, 500=10.89%, 750=12.84%, 1000=13.32% 00:14:36.135 lat (msec) : 2=41.35%, 4=19.64%, 10=0.26% 00:14:36.135 cpu : usr=25.40%, sys=53.04%, ctx=107, majf=0, minf=764 00:14:36.135 IO depths : 1=0.2%, 2=1.7%, 4=5.2%, 8=11.9%, 16=25.8%, 32=53.6%, >=64=1.7% 00:14:36.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.135 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:14:36.135 issued rwts: total=130775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:36.135 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:36.135 00:14:36.135 Run status group 0 (all jobs): 00:14:36.135 READ: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=511MiB (536MB), run=5001-5001msec 00:14:36.394 ----------------------------------------------------- 00:14:36.394 Suppressions used: 00:14:36.394 count bytes template 00:14:36.394 1 11 /usr/src/fio/parse.c 00:14:36.394 1 8 libtcmalloc_minimal.so 00:14:36.394 1 904 libcrypto.so 00:14:36.394 ----------------------------------------------------- 00:14:36.394 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
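The xtrace above captures how the plugin test resolves the ASan runtime before launching fio; condensed into a sketch (same commands as traced, trimmed to the essentials, reusing the /tmp/xnvme_bdev.json sketched earlier to feed fd 62):

    # Sketch of the traced sanitizer-preload logic: locate libasan among the
    # plugin's dependencies, then make fio map it before dlopen()ing the
    # external spdk_bdev ioengine.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
      --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
      --time_based --runtime=5 --thread=1 --name xnvme_bdev \
      62</tmp/xnvme_bdev.json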
00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:36.394 18:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:36.653 { 00:14:36.653 "subsystems": [ 00:14:36.653 { 00:14:36.653 "subsystem": "bdev", 00:14:36.653 "config": [ 00:14:36.653 { 00:14:36.653 "params": { 00:14:36.653 "io_mechanism": "libaio", 00:14:36.653 "conserve_cpu": false, 00:14:36.653 "filename": "/dev/nvme0n1", 00:14:36.653 "name": "xnvme_bdev" 00:14:36.653 }, 00:14:36.653 "method": "bdev_xnvme_create" 00:14:36.653 }, 00:14:36.653 { 00:14:36.653 "method": "bdev_wait_for_examine" 00:14:36.653 } 00:14:36.653 ] 00:14:36.653 } 00:14:36.653 ] 00:14:36.653 } 00:14:36.653 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:36.653 fio-3.35 00:14:36.653 Starting 1 thread 00:14:43.214 00:14:43.214 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71130: Tue Nov 26 18:59:13 2024 00:14:43.214 write: IOPS=26.6k, BW=104MiB/s (109MB/s)(520MiB/5001msec); 0 zone resets 00:14:43.214 slat (usec): min=5, max=3570, avg=32.98, stdev=34.96 00:14:43.214 clat (usec): min=70, max=6342, avg=1356.92, stdev=831.96 00:14:43.214 lat (usec): min=152, max=6385, avg=1389.91, stdev=837.34 00:14:43.214 clat percentiles (usec): 00:14:43.214 | 1.00th=[ 241], 5.00th=[ 371], 10.00th=[ 474], 20.00th=[ 668], 00:14:43.214 | 30.00th=[ 824], 40.00th=[ 979], 50.00th=[ 1139], 60.00th=[ 1352], 00:14:43.214 | 70.00th=[ 1614], 80.00th=[ 2008], 90.00th=[ 2573], 95.00th=[ 3032], 00:14:43.214 | 99.00th=[ 3818], 99.50th=[ 4080], 99.90th=[ 4752], 99.95th=[ 5538], 00:14:43.214 | 99.99th=[ 6259] 00:14:43.214 bw ( KiB/s): min=86976, max=130384, per=100.00%, avg=107900.44, stdev=15993.21, samples=9 00:14:43.214 iops : min=21744, max=32596, avg=26975.11, stdev=3998.30, samples=9 00:14:43.214 lat (usec) : 100=0.01%, 250=1.22%, 500=10.04%, 750=13.98%, 1000=16.20% 00:14:43.214 lat (msec) : 2=38.58%, 4=19.33%, 10=0.65% 00:14:43.214 cpu : usr=26.18%, sys=52.62%, ctx=124, majf=0, minf=765 00:14:43.214 IO depths : 1=0.2%, 2=1.5%, 4=4.4%, 8=11.0%, 16=25.5%, 32=55.7%, >=64=1.8% 00:14:43.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.214 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:14:43.214 issued rwts: total=0,133239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:43.214 00:14:43.214 Run status group 0 (all jobs): 00:14:43.214 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=520MiB (546MB), run=5001-5001msec 00:14:43.782 ----------------------------------------------------- 00:14:43.782 Suppressions used: 00:14:43.782 count bytes template 00:14:43.782 1 11 /usr/src/fio/parse.c 00:14:43.782 1 8 libtcmalloc_minimal.so 00:14:43.782 1 904 libcrypto.so 00:14:43.782 ----------------------------------------------------- 00:14:43.782 00:14:43.782 ************************************ 00:14:43.782 END TEST 
xnvme_fio_plugin 00:14:43.782 ************************************ 00:14:43.782 00:14:43.782 real 0m14.642s 00:14:43.782 user 0m6.259s 00:14:43.782 sys 0m5.904s 00:14:43.782 18:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.782 18:59:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:44.040 18:59:14 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:44.040 18:59:14 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:44.040 18:59:14 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:44.040 18:59:14 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:44.040 18:59:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.040 18:59:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.040 18:59:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.040 ************************************ 00:14:44.040 START TEST xnvme_rpc 00:14:44.040 ************************************ 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:44.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71222 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71222 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71222 ']' 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.040 18:59:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.040 [2024-11-26 18:59:15.144842] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
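The create/inspect/delete cycle that xnvme_rpc drives through rpc_cmd maps onto plain rpc.py calls; a minimal sketch, assuming the stock SPDK helper at scripts/rpc.py and a target listening on the default socket:

    # Sketch: same RPC sequence as the traced test, conserve_cpu pass (-c).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed helper path
    $rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c => conserve_cpu=true
    $rpc framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    $rpc bdev_xnvme_delete xnvme_bdev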
00:14:44.040 [2024-11-26 18:59:15.145264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71222 ] 00:14:44.299 [2024-11-26 18:59:15.319005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.299 [2024-11-26 18:59:15.462754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.235 xnvme_bdev 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:45.235 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:45.236 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71222 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71222 ']' 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71222 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71222 00:14:45.495 killing process with pid 71222 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71222' 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71222 00:14:45.495 18:59:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71222 00:14:48.024 ************************************ 00:14:48.024 END TEST xnvme_rpc 00:14:48.024 ************************************ 00:14:48.024 00:14:48.024 real 0m3.727s 00:14:48.024 user 0m4.022s 00:14:48.024 sys 0m0.445s 00:14:48.024 18:59:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.024 18:59:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.024 18:59:18 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:48.024 18:59:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:48.024 18:59:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.024 18:59:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:48.024 ************************************ 00:14:48.024 START TEST xnvme_bdevperf 00:14:48.024 ************************************ 00:14:48.024 18:59:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:48.024 18:59:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:48.024 18:59:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:48.024 18:59:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:48.024 18:59:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:48.024 18:59:18 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:48.024 18:59:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:48.024 18:59:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:48.024 { 00:14:48.024 "subsystems": [ 00:14:48.024 { 00:14:48.024 "subsystem": "bdev", 00:14:48.024 "config": [ 00:14:48.024 { 00:14:48.024 "params": { 00:14:48.024 "io_mechanism": "libaio", 00:14:48.024 "conserve_cpu": true, 00:14:48.024 "filename": "/dev/nvme0n1", 00:14:48.024 "name": "xnvme_bdev" 00:14:48.024 }, 00:14:48.024 "method": "bdev_xnvme_create" 00:14:48.024 }, 00:14:48.024 { 00:14:48.024 "method": "bdev_wait_for_examine" 00:14:48.024 } 00:14:48.024 ] 00:14:48.024 } 00:14:48.024 ] 00:14:48.024 } 00:14:48.024 [2024-11-26 18:59:18.876516] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:14:48.024 [2024-11-26 18:59:18.876847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71297 ] 00:14:48.024 [2024-11-26 18:59:19.053270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.024 [2024-11-26 18:59:19.156127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.282 Running I/O for 5 seconds... 00:14:50.595 28313.00 IOPS, 110.60 MiB/s [2024-11-26T18:59:22.749Z] 25565.50 IOPS, 99.87 MiB/s [2024-11-26T18:59:23.758Z] 25116.33 IOPS, 98.11 MiB/s [2024-11-26T18:59:24.710Z] 25678.75 IOPS, 100.31 MiB/s [2024-11-26T18:59:24.710Z] 25650.60 IOPS, 100.20 MiB/s 00:14:53.495 Latency(us) 00:14:53.495 [2024-11-26T18:59:24.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.495 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:53.495 xnvme_bdev : 5.01 25626.58 100.10 0.00 0.00 2491.41 558.55 5838.66 00:14:53.495 [2024-11-26T18:59:24.711Z] =================================================================================================================== 00:14:53.496 [2024-11-26T18:59:24.711Z] Total : 25626.58 100.10 0.00 0.00 2491.41 558.55 5838.66 00:14:54.430 18:59:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:54.430 18:59:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:54.430 18:59:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:54.430 18:59:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:54.430 18:59:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:54.430 { 00:14:54.430 "subsystems": [ 00:14:54.430 { 00:14:54.430 "subsystem": "bdev", 00:14:54.430 "config": [ 00:14:54.430 { 00:14:54.430 "params": { 00:14:54.431 "io_mechanism": "libaio", 00:14:54.431 "conserve_cpu": true, 00:14:54.431 "filename": "/dev/nvme0n1", 00:14:54.431 "name": "xnvme_bdev" 00:14:54.431 }, 00:14:54.431 "method": "bdev_xnvme_create" 00:14:54.431 }, 00:14:54.431 { 00:14:54.431 "method": "bdev_wait_for_examine" 00:14:54.431 } 00:14:54.431 ] 00:14:54.431 } 00:14:54.431 ] 00:14:54.431 } 00:14:54.689 [2024-11-26 18:59:25.685057] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
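As the xtrace between the test groups shows, the harness replays the whole rpc/bdevperf/fio matrix once per conserve_cpu value, and only that one field in the generated bdev config changes; a condensed sketch of the loop (names simplified from the traced xnvme.sh internals):

    # Sketch of the traced toggle: xnvme.sh iterates the conserve_cpu values and
    # regenerates the bdev_xnvme_create params before re-running the sub-tests.
    declare -A method_bdev_xnvme_create_0
    for cc in false true; do
      method_bdev_xnvme_create_0["conserve_cpu"]=$cc
      echo "conserve_cpu=${method_bdev_xnvme_create_0[conserve_cpu]}"   # stand-in for xnvme_rpc/xnvme_bdevperf/xnvme_fio_plugin
    done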
00:14:54.689 [2024-11-26 18:59:25.685468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71378 ] 00:14:54.689 [2024-11-26 18:59:25.868163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.948 [2024-11-26 18:59:25.998487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.207 Running I/O for 5 seconds... 00:14:57.518 24292.00 IOPS, 94.89 MiB/s [2024-11-26T18:59:29.667Z] 24643.50 IOPS, 96.26 MiB/s [2024-11-26T18:59:30.625Z] 24036.67 IOPS, 93.89 MiB/s [2024-11-26T18:59:31.585Z] 23913.25 IOPS, 93.41 MiB/s [2024-11-26T18:59:31.585Z] 24523.40 IOPS, 95.79 MiB/s 00:15:00.370 Latency(us) 00:15:00.370 [2024-11-26T18:59:31.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.370 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:00.370 xnvme_bdev : 5.01 24507.42 95.73 0.00 0.00 2604.78 603.23 6136.55 00:15:00.370 [2024-11-26T18:59:31.585Z] =================================================================================================================== 00:15:00.370 [2024-11-26T18:59:31.585Z] Total : 24507.42 95.73 0.00 0.00 2604.78 603.23 6136.55 00:15:01.304 00:15:01.304 real 0m13.594s 00:15:01.304 user 0m5.137s 00:15:01.304 sys 0m5.944s 00:15:01.304 18:59:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.304 18:59:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:01.304 ************************************ 00:15:01.304 END TEST xnvme_bdevperf 00:15:01.304 ************************************ 00:15:01.304 18:59:32 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:01.304 18:59:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.304 18:59:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.304 18:59:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.304 ************************************ 00:15:01.304 START TEST xnvme_fio_plugin 00:15:01.304 ************************************ 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:01.304 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.305 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:01.305 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.305 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:01.305 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:01.305 18:59:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.305 { 00:15:01.305 "subsystems": [ 00:15:01.305 { 00:15:01.305 "subsystem": "bdev", 00:15:01.305 "config": [ 00:15:01.305 { 00:15:01.305 "params": { 00:15:01.305 "io_mechanism": "libaio", 00:15:01.305 "conserve_cpu": true, 00:15:01.305 "filename": "/dev/nvme0n1", 00:15:01.305 "name": "xnvme_bdev" 00:15:01.305 }, 00:15:01.305 "method": "bdev_xnvme_create" 00:15:01.305 }, 00:15:01.305 { 00:15:01.305 "method": "bdev_wait_for_examine" 00:15:01.305 } 00:15:01.305 ] 00:15:01.305 } 00:15:01.305 ] 00:15:01.305 } 00:15:01.562 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:01.562 fio-3.35 00:15:01.562 Starting 1 thread 00:15:08.122 00:15:08.122 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71503: Tue Nov 26 18:59:38 2024 00:15:08.122 read: IOPS=25.7k, BW=100MiB/s (105MB/s)(502MiB/5001msec) 00:15:08.122 slat (usec): min=5, max=713, avg=34.69, stdev=28.78 00:15:08.122 clat (usec): min=118, max=5691, avg=1372.81, stdev=772.70 00:15:08.122 lat (usec): min=165, max=5794, avg=1407.50, stdev=775.85 00:15:08.122 clat percentiles (usec): 00:15:08.122 | 1.00th=[ 233], 5.00th=[ 347], 10.00th=[ 457], 20.00th=[ 668], 00:15:08.122 | 30.00th=[ 865], 40.00th=[ 1057], 50.00th=[ 1254], 60.00th=[ 1467], 00:15:08.122 | 70.00th=[ 1713], 80.00th=[ 2024], 90.00th=[ 2442], 95.00th=[ 2769], 00:15:08.122 | 99.00th=[ 3556], 99.50th=[ 3851], 99.90th=[ 4424], 99.95th=[ 4686], 00:15:08.122 | 99.99th=[ 5145] 00:15:08.122 bw ( KiB/s): min=89563, max=113880, 
per=100.00%, avg=103558.00, stdev=8228.16, samples=9 00:15:08.122 iops : min=22390, max=28470, avg=25889.33, stdev=2057.27, samples=9 00:15:08.122 lat (usec) : 250=1.47%, 500=10.54%, 750=12.29%, 1000=12.88% 00:15:08.122 lat (msec) : 2=42.13%, 4=20.35%, 10=0.34% 00:15:08.122 cpu : usr=24.26%, sys=53.58%, ctx=88, majf=0, minf=664 00:15:08.122 IO depths : 1=0.2%, 2=1.6%, 4=5.2%, 8=12.0%, 16=25.8%, 32=53.6%, >=64=1.7% 00:15:08.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.122 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:15:08.122 issued rwts: total=128624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.122 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:08.122 00:15:08.122 Run status group 0 (all jobs): 00:15:08.122 READ: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=502MiB (527MB), run=5001-5001msec 00:15:08.689 ----------------------------------------------------- 00:15:08.689 Suppressions used: 00:15:08.689 count bytes template 00:15:08.689 1 11 /usr/src/fio/parse.c 00:15:08.689 1 8 libtcmalloc_minimal.so 00:15:08.689 1 904 libcrypto.so 00:15:08.689 ----------------------------------------------------- 00:15:08.689 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:08.689 18:59:39 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:08.689 18:59:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.689 { 00:15:08.689 "subsystems": [ 00:15:08.689 { 00:15:08.689 "subsystem": "bdev", 00:15:08.689 "config": [ 00:15:08.689 { 00:15:08.689 "params": { 00:15:08.689 "io_mechanism": "libaio", 00:15:08.689 "conserve_cpu": true, 00:15:08.689 "filename": "/dev/nvme0n1", 00:15:08.689 "name": "xnvme_bdev" 00:15:08.689 }, 00:15:08.689 "method": "bdev_xnvme_create" 00:15:08.689 }, 00:15:08.689 { 00:15:08.689 "method": "bdev_wait_for_examine" 00:15:08.689 } 00:15:08.689 ] 00:15:08.689 } 00:15:08.689 ] 00:15:08.689 } 00:15:08.948 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:08.948 fio-3.35 00:15:08.948 Starting 1 thread 00:15:15.512 00:15:15.512 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71595: Tue Nov 26 18:59:45 2024 00:15:15.512 write: IOPS=24.8k, BW=96.9MiB/s (102MB/s)(485MiB/5001msec); 0 zone resets 00:15:15.512 slat (usec): min=5, max=831, avg=35.91, stdev=31.15 00:15:15.512 clat (usec): min=120, max=6055, avg=1438.60, stdev=806.69 00:15:15.512 lat (usec): min=158, max=6188, avg=1474.51, stdev=809.97 00:15:15.512 clat percentiles (usec): 00:15:15.512 | 1.00th=[ 243], 5.00th=[ 367], 10.00th=[ 498], 20.00th=[ 717], 00:15:15.512 | 30.00th=[ 914], 40.00th=[ 1106], 50.00th=[ 1287], 60.00th=[ 1516], 00:15:15.512 | 70.00th=[ 1778], 80.00th=[ 2147], 90.00th=[ 2573], 95.00th=[ 2900], 00:15:15.512 | 99.00th=[ 3720], 99.50th=[ 4080], 99.90th=[ 4686], 99.95th=[ 4883], 00:15:15.512 | 99.99th=[ 5538] 00:15:15.513 bw ( KiB/s): min=88200, max=126424, per=100.00%, avg=99936.00, stdev=11815.51, samples=9 00:15:15.513 iops : min=22050, max=31606, avg=24984.00, stdev=2953.88, samples=9 00:15:15.513 lat (usec) : 250=1.18%, 500=8.95%, 750=11.35%, 1000=13.02% 00:15:15.513 lat (msec) : 2=42.04%, 4=22.86%, 10=0.60% 00:15:15.513 cpu : usr=24.56%, sys=53.58%, ctx=53, majf=0, minf=765 00:15:15.513 IO depths : 1=0.1%, 2=1.6%, 4=5.3%, 8=11.9%, 16=25.5%, 32=53.8%, >=64=1.7% 00:15:15.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.513 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:15:15.513 issued rwts: total=0,124048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:15.513 00:15:15.513 Run status group 0 (all jobs): 00:15:15.513 WRITE: bw=96.9MiB/s (102MB/s), 96.9MiB/s-96.9MiB/s (102MB/s-102MB/s), io=485MiB (508MB), run=5001-5001msec 00:15:16.079 ----------------------------------------------------- 00:15:16.079 Suppressions used: 00:15:16.079 count bytes template 00:15:16.079 1 11 /usr/src/fio/parse.c 00:15:16.079 1 8 libtcmalloc_minimal.so 00:15:16.079 1 904 libcrypto.so 00:15:16.079 ----------------------------------------------------- 00:15:16.079 00:15:16.079 00:15:16.079 real 0m14.800s 00:15:16.079 user 0m6.261s 00:15:16.079 sys 0m5.979s 00:15:16.079 18:59:47 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.079 18:59:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:16.079 ************************************ 00:15:16.079 END TEST xnvme_fio_plugin 00:15:16.079 ************************************ 00:15:16.079 18:59:47 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:16.079 18:59:47 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:16.079 18:59:47 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:16.079 18:59:47 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:16.079 18:59:47 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:16.079 18:59:47 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:16.079 18:59:47 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:16.079 18:59:47 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:16.079 18:59:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:16.079 18:59:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:16.079 18:59:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.079 18:59:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:16.079 ************************************ 00:15:16.079 START TEST xnvme_rpc 00:15:16.079 ************************************ 00:15:16.079 18:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:16.079 18:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:16.079 18:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:16.079 18:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:16.079 18:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:16.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.079 18:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71681 00:15:16.080 18:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.080 18:59:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71681 00:15:16.080 18:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71681 ']' 00:15:16.080 18:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.080 18:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.080 18:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.080 18:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.080 18:59:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.360 [2024-11-26 18:59:47.379003] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
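For the io_uring pass only the io_mechanism argument changes (the conserve_cpu flag slot is passed empty on the first round); a sketch reusing the assumed rpc.py helper from above:

    # Sketch: io_uring-backed xnvme bdev, as created by the traced rpc_cmd call.
    $rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''
    $rpc framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring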
00:15:16.360 [2024-11-26 18:59:47.379760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71681 ] 00:15:16.360 [2024-11-26 18:59:47.555523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.626 [2024-11-26 18:59:47.658480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.560 xnvme_bdev 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:17.560 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71681 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71681 ']' 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71681 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71681 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71681' 00:15:17.561 killing process with pid 71681 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71681 00:15:17.561 18:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71681 00:15:20.095 00:15:20.095 real 0m3.513s 00:15:20.095 user 0m3.853s 00:15:20.095 sys 0m0.404s 00:15:20.095 18:59:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.095 18:59:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.095 ************************************ 00:15:20.095 END TEST xnvme_rpc 00:15:20.095 ************************************ 00:15:20.095 18:59:50 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:20.095 18:59:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:20.095 18:59:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.095 18:59:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.095 ************************************ 00:15:20.095 START TEST xnvme_bdevperf 00:15:20.095 ************************************ 00:15:20.095 18:59:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:20.095 18:59:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:20.095 18:59:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:20.095 18:59:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:20.095 18:59:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:20.095 18:59:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:15:20.095 18:59:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:20.095 18:59:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:20.095 { 00:15:20.095 "subsystems": [ 00:15:20.095 { 00:15:20.095 "subsystem": "bdev", 00:15:20.095 "config": [ 00:15:20.095 { 00:15:20.095 "params": { 00:15:20.095 "io_mechanism": "io_uring", 00:15:20.095 "conserve_cpu": false, 00:15:20.095 "filename": "/dev/nvme0n1", 00:15:20.095 "name": "xnvme_bdev" 00:15:20.095 }, 00:15:20.095 "method": "bdev_xnvme_create" 00:15:20.095 }, 00:15:20.095 { 00:15:20.095 "method": "bdev_wait_for_examine" 00:15:20.095 } 00:15:20.095 ] 00:15:20.095 } 00:15:20.095 ] 00:15:20.095 } 00:15:20.095 [2024-11-26 18:59:50.930526] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:15:20.095 [2024-11-26 18:59:50.930687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71761 ] 00:15:20.095 [2024-11-26 18:59:51.115702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.095 [2024-11-26 18:59:51.244865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.354 Running I/O for 5 seconds... 00:15:22.664 47779.00 IOPS, 186.64 MiB/s [2024-11-26T18:59:54.814Z] 48012.00 IOPS, 187.55 MiB/s [2024-11-26T18:59:55.756Z] 48274.33 IOPS, 188.57 MiB/s [2024-11-26T18:59:56.731Z] 48803.75 IOPS, 190.64 MiB/s 00:15:25.516 Latency(us) 00:15:25.516 [2024-11-26T18:59:56.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.517 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:25.517 xnvme_bdev : 5.00 48252.73 188.49 0.00 0.00 1321.96 458.01 8519.68 00:15:25.517 [2024-11-26T18:59:56.732Z] =================================================================================================================== 00:15:25.517 [2024-11-26T18:59:56.732Z] Total : 48252.73 188.49 0.00 0.00 1321.96 458.01 8519.68 00:15:26.451 18:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:26.451 18:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:26.451 18:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:26.451 18:59:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:26.451 18:59:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:26.709 { 00:15:26.709 "subsystems": [ 00:15:26.709 { 00:15:26.709 "subsystem": "bdev", 00:15:26.709 "config": [ 00:15:26.709 { 00:15:26.709 "params": { 00:15:26.709 "io_mechanism": "io_uring", 00:15:26.709 "conserve_cpu": false, 00:15:26.709 "filename": "/dev/nvme0n1", 00:15:26.709 "name": "xnvme_bdev" 00:15:26.709 }, 00:15:26.709 "method": "bdev_xnvme_create" 00:15:26.709 }, 00:15:26.709 { 00:15:26.709 "method": "bdev_wait_for_examine" 00:15:26.709 } 00:15:26.709 ] 00:15:26.709 } 00:15:26.709 ] 00:15:26.709 } 00:15:26.709 [2024-11-26 18:59:57.743363] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
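The MiB/s column in these bdevperf summaries is just IOPS times the 4 KiB I/O size; e.g. the io_uring randread pass above reports 48252.73 IOPS, and 48252.73 × 4096 / 2^20 ≈ 188.49 MiB/s, matching the table. A one-line check:

    # Sanity-check a bdevperf summary line: IOPS * io_size(bytes) -> MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 48252.73 * 4096 / (1024 * 1024) }'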
00:15:26.709 [2024-11-26 18:59:57.743541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71837 ] 00:15:26.967 [2024-11-26 18:59:57.926942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.967 [2024-11-26 18:59:58.058811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.225 Running I/O for 5 seconds... 00:15:29.533 44480.00 IOPS, 173.75 MiB/s [2024-11-26T19:00:01.682Z] 43616.00 IOPS, 170.38 MiB/s [2024-11-26T19:00:02.654Z] 43904.00 IOPS, 171.50 MiB/s [2024-11-26T19:00:03.590Z] 44016.00 IOPS, 171.94 MiB/s 00:15:32.375 Latency(us) 00:15:32.375 [2024-11-26T19:00:03.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.375 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:32.375 xnvme_bdev : 5.00 43928.26 171.59 0.00 0.00 1451.76 755.90 4230.05 00:15:32.375 [2024-11-26T19:00:03.590Z] =================================================================================================================== 00:15:32.375 [2024-11-26T19:00:03.590Z] Total : 43928.26 171.59 0.00 0.00 1451.76 755.90 4230.05 00:15:33.310 00:15:33.310 real 0m13.653s 00:15:33.310 user 0m7.222s 00:15:33.310 sys 0m6.203s 00:15:33.310 19:00:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.310 19:00:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:33.310 ************************************ 00:15:33.310 END TEST xnvme_bdevperf 00:15:33.310 ************************************ 00:15:33.568 19:00:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:33.568 19:00:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:33.568 19:00:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.568 19:00:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:33.568 ************************************ 00:15:33.568 START TEST xnvme_fio_plugin 00:15:33.568 ************************************ 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:33.568 19:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:33.568 { 00:15:33.568 "subsystems": [ 00:15:33.568 { 00:15:33.568 "subsystem": "bdev", 00:15:33.568 "config": [ 00:15:33.568 { 00:15:33.568 "params": { 00:15:33.568 "io_mechanism": "io_uring", 00:15:33.568 "conserve_cpu": false, 00:15:33.568 "filename": "/dev/nvme0n1", 00:15:33.568 "name": "xnvme_bdev" 00:15:33.568 }, 00:15:33.568 "method": "bdev_xnvme_create" 00:15:33.568 }, 00:15:33.568 { 00:15:33.568 "method": "bdev_wait_for_examine" 00:15:33.568 } 00:15:33.569 ] 00:15:33.569 } 00:15:33.569 ] 00:15:33.569 } 00:15:33.827 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:33.827 fio-3.35 00:15:33.827 Starting 1 thread 00:15:40.421 00:15:40.421 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71962: Tue Nov 26 19:00:10 2024 00:15:40.421 read: IOPS=47.1k, BW=184MiB/s (193MB/s)(921MiB/5001msec) 00:15:40.421 slat (nsec): min=3073, max=97761, avg=4088.82, stdev=1677.06 00:15:40.421 clat (usec): min=185, max=5509, avg=1196.29, stdev=281.76 00:15:40.421 lat (usec): min=192, max=5513, avg=1200.38, stdev=282.06 00:15:40.421 clat percentiles (usec): 00:15:40.421 | 1.00th=[ 840], 5.00th=[ 947], 10.00th=[ 988], 20.00th=[ 1037], 00:15:40.421 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:15:40.421 | 70.00th=[ 1237], 80.00th=[ 1303], 90.00th=[ 1418], 95.00th=[ 1532], 00:15:40.422 | 99.00th=[ 2278], 99.50th=[ 3130], 99.90th=[ 4293], 99.95th=[ 4621], 00:15:40.422 | 99.99th=[ 5014] 00:15:40.422 bw ( KiB/s): min=180736, max=200848, per=100.00%, 
avg=189583.11, stdev=6420.73, samples=9 00:15:40.422 iops : min=45184, max=50212, avg=47395.78, stdev=1605.18, samples=9 00:15:40.422 lat (usec) : 250=0.01%, 500=0.08%, 750=0.28%, 1000=11.41% 00:15:40.422 lat (msec) : 2=86.82%, 4=1.21%, 10=0.18% 00:15:40.422 cpu : usr=38.24%, sys=60.72%, ctx=10, majf=0, minf=762 00:15:40.422 IO depths : 1=1.3%, 2=2.7%, 4=5.6%, 8=12.1%, 16=25.3%, 32=51.3%, >=64=1.6% 00:15:40.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.422 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:15:40.422 issued rwts: total=235786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.422 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.422 00:15:40.422 Run status group 0 (all jobs): 00:15:40.422 READ: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=921MiB (966MB), run=5001-5001msec 00:15:40.679 ----------------------------------------------------- 00:15:40.679 Suppressions used: 00:15:40.679 count bytes template 00:15:40.679 1 11 /usr/src/fio/parse.c 00:15:40.679 1 8 libtcmalloc_minimal.so 00:15:40.679 1 904 libcrypto.so 00:15:40.679 ----------------------------------------------------- 00:15:40.679 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:40.679 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:40.937 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:40.937 19:00:11 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:40.937 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:40.937 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:40.937 19:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:40.937 { 00:15:40.937 "subsystems": [ 00:15:40.937 { 00:15:40.937 "subsystem": "bdev", 00:15:40.937 "config": [ 00:15:40.937 { 00:15:40.937 "params": { 00:15:40.937 "io_mechanism": "io_uring", 00:15:40.937 "conserve_cpu": false, 00:15:40.937 "filename": "/dev/nvme0n1", 00:15:40.937 "name": "xnvme_bdev" 00:15:40.937 }, 00:15:40.937 "method": "bdev_xnvme_create" 00:15:40.937 }, 00:15:40.937 { 00:15:40.937 "method": "bdev_wait_for_examine" 00:15:40.937 } 00:15:40.937 ] 00:15:40.937 } 00:15:40.937 ] 00:15:40.937 } 00:15:40.937 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:40.937 fio-3.35 00:15:40.937 Starting 1 thread 00:15:47.596 00:15:47.596 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72054: Tue Nov 26 19:00:17 2024 00:15:47.596 write: IOPS=43.1k, BW=168MiB/s (177MB/s)(842MiB/5001msec); 0 zone resets 00:15:47.596 slat (usec): min=2, max=103, avg= 4.90, stdev= 2.47 00:15:47.596 clat (usec): min=114, max=7624, avg=1291.17, stdev=330.93 00:15:47.596 lat (usec): min=120, max=7647, avg=1296.06, stdev=331.83 00:15:47.596 clat percentiles (usec): 00:15:47.596 | 1.00th=[ 881], 5.00th=[ 971], 10.00th=[ 1012], 20.00th=[ 1074], 00:15:47.596 | 30.00th=[ 1123], 40.00th=[ 1172], 50.00th=[ 1221], 60.00th=[ 1287], 00:15:47.596 | 70.00th=[ 1369], 80.00th=[ 1483], 90.00th=[ 1647], 95.00th=[ 1778], 00:15:47.596 | 99.00th=[ 2343], 99.50th=[ 2737], 99.90th=[ 4752], 99.95th=[ 5407], 00:15:47.596 | 99.99th=[ 6849] 00:15:47.596 bw ( KiB/s): min=141704, max=202240, per=100.00%, avg=175375.11, stdev=21142.10, samples=9 00:15:47.596 iops : min=35426, max=50560, avg=43843.78, stdev=5285.52, samples=9 00:15:47.596 lat (usec) : 250=0.02%, 500=0.09%, 750=0.20%, 1000=8.16% 00:15:47.596 lat (msec) : 2=89.47%, 4=1.87%, 10=0.19% 00:15:47.596 cpu : usr=41.20%, sys=57.62%, ctx=16, majf=0, minf=763 00:15:47.596 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.2%, 16=24.5%, 32=51.0%, >=64=1.7% 00:15:47.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.596 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:47.596 issued rwts: total=0,215633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.596 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.596 00:15:47.596 Run status group 0 (all jobs): 00:15:47.597 WRITE: bw=168MiB/s (177MB/s), 168MiB/s-168MiB/s (177MB/s-177MB/s), io=842MiB (883MB), run=5001-5001msec 00:15:48.161 ----------------------------------------------------- 00:15:48.161 Suppressions used: 00:15:48.161 count bytes template 00:15:48.161 1 11 /usr/src/fio/parse.c 00:15:48.161 1 8 libtcmalloc_minimal.so 00:15:48.161 1 904 libcrypto.so 00:15:48.161 ----------------------------------------------------- 00:15:48.161 00:15:48.161 00:15:48.161 real 0m14.634s 00:15:48.161 user 0m7.664s 00:15:48.161 sys 0m6.569s 00:15:48.161 19:00:19 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.161 19:00:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:48.161 ************************************ 00:15:48.161 END TEST xnvme_fio_plugin 00:15:48.161 ************************************ 00:15:48.161 19:00:19 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:48.161 19:00:19 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:48.161 19:00:19 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:48.161 19:00:19 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:48.161 19:00:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:48.161 19:00:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.161 19:00:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.161 ************************************ 00:15:48.161 START TEST xnvme_rpc 00:15:48.161 ************************************ 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72141 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72141 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72141 ']' 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.161 19:00:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.161 [2024-11-26 19:00:19.352470] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
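
The xnvme_rpc test now starting boots a bare spdk_tgt and drives it entirely over JSON-RPC: create the bdev, read the live config back field by field, delete the bdev, kill the target. A rough standalone replay of that flow, as a sketch rather than the test script itself (assumes the standard scripts/rpc.py client; the sleep is a crude stand-in for waitforlisten):

  #!/usr/bin/env bash
  # Manual replay of the xnvme_rpc flow (sketch; paths are placeholders).
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
  RPC="$SPDK_DIR/scripts/rpc.py"
  "$SPDK_DIR/build/bin/spdk_tgt" &
  tgt_pid=$!
  sleep 2   # crude stand-in for the harness's waitforlisten
  # Create the bdev; the trailing -c is the conserve_cpu toggle under test here.
  "$RPC" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
  # Read the config back and pluck single fields, as the rpc_xnvme helper does.
  "$RPC" framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  "$RPC" bdev_xnvme_delete xnvme_bdev
  kill "$tgt_pid"

Reading the parameters back through framework_get_config, instead of trusting the create call's return code, is what lets the test verify that name, filename, io_mechanism and conserve_cpu all round-trip through the target intact.
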
00:15:48.161 [2024-11-26 19:00:19.352620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72141 ] 00:15:48.418 [2024-11-26 19:00:19.521285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.418 [2024-11-26 19:00:19.623279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.407 xnvme_bdev 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.407 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72141 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72141 ']' 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72141 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72141 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.666 killing process with pid 72141 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72141' 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72141 00:15:49.666 19:00:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72141 00:15:51.570 00:15:51.570 real 0m3.510s 00:15:51.570 user 0m3.779s 00:15:51.570 sys 0m0.438s 00:15:51.570 19:00:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.570 19:00:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.570 ************************************ 00:15:51.570 END TEST xnvme_rpc 00:15:51.570 ************************************ 00:15:51.570 19:00:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:51.570 19:00:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:51.570 19:00:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.570 19:00:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.829 ************************************ 00:15:51.829 START TEST xnvme_bdevperf 00:15:51.829 ************************************ 00:15:51.829 19:00:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:51.829 19:00:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:51.829 19:00:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:51.829 19:00:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:51.829 19:00:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:51.829 19:00:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:15:51.829 19:00:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:51.829 19:00:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:51.829 { 00:15:51.829 "subsystems": [ 00:15:51.829 { 00:15:51.829 "subsystem": "bdev", 00:15:51.829 "config": [ 00:15:51.829 { 00:15:51.829 "params": { 00:15:51.829 "io_mechanism": "io_uring", 00:15:51.829 "conserve_cpu": true, 00:15:51.829 "filename": "/dev/nvme0n1", 00:15:51.829 "name": "xnvme_bdev" 00:15:51.829 }, 00:15:51.829 "method": "bdev_xnvme_create" 00:15:51.829 }, 00:15:51.829 { 00:15:51.829 "method": "bdev_wait_for_examine" 00:15:51.829 } 00:15:51.829 ] 00:15:51.829 } 00:15:51.829 ] 00:15:51.829 } 00:15:51.829 [2024-11-26 19:00:22.922344] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:15:51.829 [2024-11-26 19:00:22.922511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72216 ] 00:15:52.088 [2024-11-26 19:00:23.107261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.088 [2024-11-26 19:00:23.227315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.345 Running I/O for 5 seconds... 00:15:54.650 56090.00 IOPS, 219.10 MiB/s [2024-11-26T19:00:26.796Z] 55298.00 IOPS, 216.01 MiB/s [2024-11-26T19:00:27.779Z] 54447.33 IOPS, 212.68 MiB/s [2024-11-26T19:00:28.712Z] 54646.25 IOPS, 213.46 MiB/s [2024-11-26T19:00:28.712Z] 54925.00 IOPS, 214.55 MiB/s 00:15:57.497 Latency(us) 00:15:57.497 [2024-11-26T19:00:28.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.497 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:57.497 xnvme_bdev : 5.00 54914.46 214.51 0.00 0.00 1161.69 108.92 4617.31 00:15:57.497 [2024-11-26T19:00:28.712Z] =================================================================================================================== 00:15:57.497 [2024-11-26T19:00:28.712Z] Total : 54914.46 214.51 0.00 0.00 1161.69 108.92 4617.31 00:15:58.431 19:00:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:58.431 19:00:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:58.431 19:00:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:58.431 19:00:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:58.431 19:00:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:58.431 { 00:15:58.431 "subsystems": [ 00:15:58.431 { 00:15:58.431 "subsystem": "bdev", 00:15:58.431 "config": [ 00:15:58.431 { 00:15:58.431 "params": { 00:15:58.431 "io_mechanism": "io_uring", 00:15:58.431 "conserve_cpu": true, 00:15:58.431 "filename": "/dev/nvme0n1", 00:15:58.431 "name": "xnvme_bdev" 00:15:58.431 }, 00:15:58.431 "method": "bdev_xnvme_create" 00:15:58.431 }, 00:15:58.431 { 00:15:58.431 "method": "bdev_wait_for_examine" 00:15:58.431 } 00:15:58.431 ] 00:15:58.431 } 00:15:58.431 ] 00:15:58.431 } 00:15:58.689 [2024-11-26 19:00:29.693212] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
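
Everything from the last xnvme_rpc test onward repeats the same three tests with a single delta in the JSON above: "conserve_cpu": true. The sweep is driven by the outer loop whose xtrace lines (xnvme.sh@82-@88) appear before each pass; reconstructed from those lines, it is roughly the following sketch (run_test and method_bdev_xnvme_create_0 belong to the harness, and the array contents are inferred from the false-then-true ordering seen in the log):

  # Reconstructed from the xnvme.sh@82-@88 xtrace, not the verbatim script.
  declare -A method_bdev_xnvme_create_0   # harness state that feeds gen_conf
  xnvme_conserve_cpu=(false true)         # inferred from the observed ordering
  for cc in "${xnvme_conserve_cpu[@]}"; do
      method_bdev_xnvme_create_0["conserve_cpu"]=$cc
      conserve_cpu=$cc
      run_test "xnvme_rpc" xnvme_rpc
      run_test "xnvme_bdevperf" xnvme_bdevperf
      run_test "xnvme_fio_plugin" xnvme_fio_plugin
  done
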
00:15:58.690 [2024-11-26 19:00:29.693377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72296 ] 00:15:58.690 [2024-11-26 19:00:29.892278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.949 [2024-11-26 19:00:30.017131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.207 Running I/O for 5 seconds... 00:16:01.129 36992.00 IOPS, 144.50 MiB/s [2024-11-26T19:00:33.720Z] 36672.00 IOPS, 143.25 MiB/s [2024-11-26T19:00:34.654Z] 36864.00 IOPS, 144.00 MiB/s [2024-11-26T19:00:35.588Z] 37264.00 IOPS, 145.56 MiB/s 00:16:04.373 Latency(us) 00:16:04.373 [2024-11-26T19:00:35.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.373 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:04.373 xnvme_bdev : 5.00 36972.52 144.42 0.00 0.00 1724.23 886.23 6553.60 00:16:04.373 [2024-11-26T19:00:35.588Z] =================================================================================================================== 00:16:04.373 [2024-11-26T19:00:35.588Z] Total : 36972.52 144.42 0.00 0.00 1724.23 886.23 6553.60 00:16:05.305 00:16:05.305 real 0m13.536s 00:16:05.305 user 0m9.430s 00:16:05.305 sys 0m3.539s 00:16:05.305 19:00:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.305 19:00:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:05.305 ************************************ 00:16:05.305 END TEST xnvme_bdevperf 00:16:05.305 ************************************ 00:16:05.305 19:00:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:05.305 19:00:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:05.305 19:00:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.305 19:00:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:05.305 ************************************ 00:16:05.305 START TEST xnvme_fio_plugin 00:16:05.305 ************************************ 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:05.305 19:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:05.305 { 00:16:05.305 "subsystems": [ 00:16:05.305 { 00:16:05.305 "subsystem": "bdev", 00:16:05.305 "config": [ 00:16:05.305 { 00:16:05.305 "params": { 00:16:05.305 "io_mechanism": "io_uring", 00:16:05.305 "conserve_cpu": true, 00:16:05.305 "filename": "/dev/nvme0n1", 00:16:05.305 "name": "xnvme_bdev" 00:16:05.305 }, 00:16:05.305 "method": "bdev_xnvme_create" 00:16:05.305 }, 00:16:05.305 { 00:16:05.305 "method": "bdev_wait_for_examine" 00:16:05.305 } 00:16:05.305 ] 00:16:05.305 } 00:16:05.305 ] 00:16:05.305 } 00:16:05.561 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:05.561 fio-3.35 00:16:05.561 Starting 1 thread 00:16:12.166 00:16:12.166 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72416: Tue Nov 26 19:00:42 2024 00:16:12.166 read: IOPS=49.1k, BW=192MiB/s (201MB/s)(960MiB/5001msec) 00:16:12.166 slat (usec): min=2, max=151, avg= 4.02, stdev= 1.80 00:16:12.166 clat (usec): min=157, max=4583, avg=1139.19, stdev=180.26 00:16:12.166 lat (usec): min=209, max=4587, avg=1143.21, stdev=180.91 00:16:12.166 clat percentiles (usec): 00:16:12.166 | 1.00th=[ 873], 5.00th=[ 930], 10.00th=[ 963], 20.00th=[ 1004], 00:16:12.166 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:16:12.166 | 70.00th=[ 1188], 80.00th=[ 1237], 90.00th=[ 1352], 95.00th=[ 1483], 00:16:12.166 | 99.00th=[ 1729], 99.50th=[ 1811], 99.90th=[ 2073], 99.95th=[ 2769], 00:16:12.166 | 99.99th=[ 3589] 00:16:12.166 bw ( KiB/s): min=178176, max=216064, per=99.42%, avg=195405.33, stdev=13652.87, 
samples=9 00:16:12.166 iops : min=44544, max=54016, avg=48851.33, stdev=3413.22, samples=9 00:16:12.166 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=19.20% 00:16:12.166 lat (msec) : 2=80.67%, 4=0.12%, 10=0.01% 00:16:12.166 cpu : usr=68.56%, sys=27.32%, ctx=9, majf=0, minf=762 00:16:12.166 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:12.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.166 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:12.166 issued rwts: total=245742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.166 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:12.166 00:16:12.166 Run status group 0 (all jobs): 00:16:12.166 READ: bw=192MiB/s (201MB/s), 192MiB/s-192MiB/s (201MB/s-201MB/s), io=960MiB (1007MB), run=5001-5001msec 00:16:12.424 ----------------------------------------------------- 00:16:12.424 Suppressions used: 00:16:12.424 count bytes template 00:16:12.424 1 11 /usr/src/fio/parse.c 00:16:12.424 1 8 libtcmalloc_minimal.so 00:16:12.424 1 904 libcrypto.so 00:16:12.424 ----------------------------------------------------- 00:16:12.424 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:12.717 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:12.718 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:12.718 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:12.718 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:12.718 19:00:43 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:12.718 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:12.718 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:12.718 19:00:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:12.718 { 00:16:12.718 "subsystems": [ 00:16:12.718 { 00:16:12.718 "subsystem": "bdev", 00:16:12.718 "config": [ 00:16:12.718 { 00:16:12.718 "params": { 00:16:12.718 "io_mechanism": "io_uring", 00:16:12.718 "conserve_cpu": true, 00:16:12.718 "filename": "/dev/nvme0n1", 00:16:12.718 "name": "xnvme_bdev" 00:16:12.718 }, 00:16:12.718 "method": "bdev_xnvme_create" 00:16:12.718 }, 00:16:12.718 { 00:16:12.718 "method": "bdev_wait_for_examine" 00:16:12.718 } 00:16:12.718 ] 00:16:12.718 } 00:16:12.718 ] 00:16:12.718 } 00:16:12.718 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:12.718 fio-3.35 00:16:12.718 Starting 1 thread 00:16:19.282 00:16:19.282 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72512: Tue Nov 26 19:00:49 2024 00:16:19.282 write: IOPS=46.5k, BW=182MiB/s (190MB/s)(908MiB/5001msec); 0 zone resets 00:16:19.282 slat (usec): min=2, max=109, avg= 4.40, stdev= 1.97 00:16:19.282 clat (usec): min=151, max=7078, avg=1199.17, stdev=207.57 00:16:19.282 lat (usec): min=159, max=7082, avg=1203.57, stdev=208.01 00:16:19.282 clat percentiles (usec): 00:16:19.282 | 1.00th=[ 922], 5.00th=[ 979], 10.00th=[ 1012], 20.00th=[ 1057], 00:16:19.282 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205], 00:16:19.282 | 70.00th=[ 1254], 80.00th=[ 1303], 90.00th=[ 1401], 95.00th=[ 1516], 00:16:19.282 | 99.00th=[ 1762], 99.50th=[ 1860], 99.90th=[ 3490], 99.95th=[ 4686], 00:16:19.282 | 99.99th=[ 6063] 00:16:19.282 bw ( KiB/s): min=174080, max=198656, per=100.00%, avg=186083.56, stdev=7335.69, samples=9 00:16:19.282 iops : min=43520, max=49664, avg=46520.89, stdev=1833.92, samples=9 00:16:19.282 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=8.33% 00:16:19.282 lat (msec) : 2=91.38%, 4=0.18%, 10=0.08% 00:16:19.282 cpu : usr=67.36%, sys=28.58%, ctx=23, majf=0, minf=763 00:16:19.282 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:19.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.282 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:19.282 issued rwts: total=0,232563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.282 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:19.282 00:16:19.282 Run status group 0 (all jobs): 00:16:19.282 WRITE: bw=182MiB/s (190MB/s), 182MiB/s-182MiB/s (190MB/s-190MB/s), io=908MiB (953MB), run=5001-5001msec 00:16:19.850 ----------------------------------------------------- 00:16:19.850 Suppressions used: 00:16:19.850 count bytes template 00:16:19.850 1 11 /usr/src/fio/parse.c 00:16:19.850 1 8 libtcmalloc_minimal.so 00:16:19.850 1 904 libcrypto.so 00:16:19.850 ----------------------------------------------------- 00:16:19.850 00:16:19.850 00:16:19.850 real 0m14.533s 00:16:19.850 user 0m10.418s 00:16:19.850 sys 0m3.431s 00:16:19.850 ************************************ 
00:16:19.850 END TEST xnvme_fio_plugin 00:16:19.850 ************************************ 00:16:19.850 19:00:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.850 19:00:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:19.850 19:00:50 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:19.850 19:00:50 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:16:19.850 19:00:50 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:16:19.850 19:00:50 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:16:19.850 19:00:50 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:19.850 19:00:50 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:19.850 19:00:50 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:19.850 19:00:50 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:19.850 19:00:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:19.850 19:00:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:19.850 19:00:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.850 19:00:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:19.850 ************************************ 00:16:19.850 START TEST xnvme_rpc 00:16:19.850 ************************************ 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72604 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72604 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72604 ']' 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.850 19:00:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.109 [2024-11-26 19:00:51.086974] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
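
With the conserve_cpu axis exhausted for io_uring, the suite advances the outer io-mechanism loop: io_mechanism becomes io_uring_cmd and the filename switches from the block device to the NVMe character device /dev/ng0n1, which xnvme drives via io_uring command passthrough. The xnvme_rpc pass starting here differs from the earlier one only in those two parameters (the bare '' in the create trace is just cc[false], i.e. no -c flag). A sketch of the equivalent manual calls, reusing the placeholder RPC client from the earlier sketch:

  # Create against the char device with the io_uring_cmd mechanism;
  # conserve_cpu is left at its default (false) on this pass.
  "$RPC" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
  # Confirm both changed parameters round-trip through the target:
  "$RPC" framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params | .filename, .io_mechanism'
  # expected output: /dev/ng0n1 and io_uring_cmd
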
00:16:20.110 [2024-11-26 19:00:51.087404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72604 ] 00:16:20.110 [2024-11-26 19:00:51.272278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.369 [2024-11-26 19:00:51.398347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.304 xnvme_bdev 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:21.304 
19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72604 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72604 ']' 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72604 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72604 00:16:21.304 killing process with pid 72604 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72604' 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72604 00:16:21.304 19:00:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72604 00:16:23.830 ************************************ 00:16:23.830 END TEST xnvme_rpc 00:16:23.830 ************************************ 00:16:23.830 00:16:23.830 real 0m3.570s 00:16:23.830 user 0m3.860s 00:16:23.830 sys 0m0.469s 00:16:23.830 19:00:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.830 19:00:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.830 19:00:54 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:23.830 19:00:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:23.830 19:00:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.830 19:00:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.830 ************************************ 00:16:23.830 START TEST xnvme_bdevperf 00:16:23.830 ************************************ 00:16:23.830 19:00:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:23.830 19:00:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:23.830 19:00:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:23.830 19:00:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:23.830 19:00:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:23.830 19:00:54 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:23.830 19:00:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:23.830 19:00:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:23.830 { 00:16:23.830 "subsystems": [ 00:16:23.831 { 00:16:23.831 "subsystem": "bdev", 00:16:23.831 "config": [ 00:16:23.831 { 00:16:23.831 "params": { 00:16:23.831 "io_mechanism": "io_uring_cmd", 00:16:23.831 "conserve_cpu": false, 00:16:23.831 "filename": "/dev/ng0n1", 00:16:23.831 "name": "xnvme_bdev" 00:16:23.831 }, 00:16:23.831 "method": "bdev_xnvme_create" 00:16:23.831 }, 00:16:23.831 { 00:16:23.831 "method": "bdev_wait_for_examine" 00:16:23.831 } 00:16:23.831 ] 00:16:23.831 } 00:16:23.831 ] 00:16:23.831 } 00:16:23.831 [2024-11-26 19:00:54.683626] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:16:23.831 [2024-11-26 19:00:54.684010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72680 ] 00:16:23.831 [2024-11-26 19:00:54.867215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.831 [2024-11-26 19:00:54.970166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.088 Running I/O for 5 seconds... 00:16:26.396 52555.00 IOPS, 205.29 MiB/s [2024-11-26T19:00:58.544Z] 50960.50 IOPS, 199.06 MiB/s [2024-11-26T19:00:59.478Z] 50848.00 IOPS, 198.62 MiB/s [2024-11-26T19:01:00.413Z] 51255.00 IOPS, 200.21 MiB/s [2024-11-26T19:01:00.413Z] 50616.80 IOPS, 197.72 MiB/s 00:16:29.198 Latency(us) 00:16:29.198 [2024-11-26T19:01:00.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.198 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:29.198 xnvme_bdev : 5.00 50602.85 197.67 0.00 0.00 1260.46 737.28 5510.98 00:16:29.198 [2024-11-26T19:01:00.413Z] =================================================================================================================== 00:16:29.198 [2024-11-26T19:01:00.413Z] Total : 50602.85 197.67 0.00 0.00 1260.46 737.28 5510.98 00:16:30.572 19:01:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:30.572 19:01:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:30.572 19:01:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:30.572 19:01:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:30.572 19:01:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:30.572 { 00:16:30.572 "subsystems": [ 00:16:30.572 { 00:16:30.572 "subsystem": "bdev", 00:16:30.572 "config": [ 00:16:30.572 { 00:16:30.572 "params": { 00:16:30.572 "io_mechanism": "io_uring_cmd", 00:16:30.572 "conserve_cpu": false, 00:16:30.572 "filename": "/dev/ng0n1", 00:16:30.572 "name": "xnvme_bdev" 00:16:30.572 }, 00:16:30.572 "method": "bdev_xnvme_create" 00:16:30.572 }, 00:16:30.572 { 00:16:30.572 "method": "bdev_wait_for_examine" 00:16:30.572 } 00:16:30.572 ] 00:16:30.572 } 00:16:30.572 ] 00:16:30.572 } 00:16:30.572 [2024-11-26 19:01:01.444599] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
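
One difference from the io_uring passes: the io_uring_cmd bdevperf run cycles through four workloads instead of two — the randread just completed, then the randwrite starting here, followed by unmap and write_zeroes (which, on NVMe, typically map to Dataset Management deallocate and Write Zeroes). A sketch of the sweep, with gen_conf standing in for the harness's JSON generator and SPDK_DIR the same placeholder as before:

  # Sketch of the four-workload bdevperf sweep for io_uring_cmd.
  for w in randread randwrite unmap write_zeroes; do
      "$SPDK_DIR/build/examples/bdevperf" --json <(gen_conf) \
          -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
  done
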
00:16:30.572 [2024-11-26 19:01:01.444748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72757 ] 00:16:30.572 [2024-11-26 19:01:01.623082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.572 [2024-11-26 19:01:01.761445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.138 Running I/O for 5 seconds... 00:16:33.009 45056.00 IOPS, 176.00 MiB/s [2024-11-26T19:01:05.159Z] 44032.00 IOPS, 172.00 MiB/s [2024-11-26T19:01:06.535Z] 44288.00 IOPS, 173.00 MiB/s [2024-11-26T19:01:07.474Z] 44928.00 IOPS, 175.50 MiB/s 00:16:36.259 Latency(us) 00:16:36.259 [2024-11-26T19:01:07.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.259 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:36.259 xnvme_bdev : 5.00 45295.63 176.94 0.00 0.00 1407.70 863.88 11319.85 00:16:36.259 [2024-11-26T19:01:07.474Z] =================================================================================================================== 00:16:36.259 [2024-11-26T19:01:07.474Z] Total : 45295.63 176.94 0.00 0.00 1407.70 863.88 11319.85 00:16:37.194 19:01:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:37.194 19:01:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:37.194 19:01:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:37.194 19:01:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:37.194 19:01:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:37.194 { 00:16:37.194 "subsystems": [ 00:16:37.194 { 00:16:37.194 "subsystem": "bdev", 00:16:37.194 "config": [ 00:16:37.194 { 00:16:37.194 "params": { 00:16:37.194 "io_mechanism": "io_uring_cmd", 00:16:37.194 "conserve_cpu": false, 00:16:37.194 "filename": "/dev/ng0n1", 00:16:37.194 "name": "xnvme_bdev" 00:16:37.194 }, 00:16:37.194 "method": "bdev_xnvme_create" 00:16:37.194 }, 00:16:37.194 { 00:16:37.194 "method": "bdev_wait_for_examine" 00:16:37.194 } 00:16:37.194 ] 00:16:37.194 } 00:16:37.194 ] 00:16:37.194 } 00:16:37.194 [2024-11-26 19:01:08.321384] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:16:37.194 [2024-11-26 19:01:08.321600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72837 ] 00:16:37.452 [2024-11-26 19:01:08.516801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.452 [2024-11-26 19:01:08.620021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.019 Running I/O for 5 seconds... 
00:16:39.900 67008.00 IOPS, 261.75 MiB/s [2024-11-26T19:01:12.051Z] 68000.00 IOPS, 265.62 MiB/s [2024-11-26T19:01:12.985Z] 66026.67 IOPS, 257.92 MiB/s [2024-11-26T19:01:14.361Z] 66912.00 IOPS, 261.38 MiB/s [2024-11-26T19:01:14.361Z] 67750.40 IOPS, 264.65 MiB/s 00:16:43.146 Latency(us) 00:16:43.146 [2024-11-26T19:01:14.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.146 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:43.147 xnvme_bdev : 5.00 67730.30 264.57 0.00 0.00 940.71 562.27 3306.59 00:16:43.147 [2024-11-26T19:01:14.362Z] =================================================================================================================== 00:16:43.147 [2024-11-26T19:01:14.362Z] Total : 67730.30 264.57 0.00 0.00 940.71 562.27 3306.59 00:16:44.082 19:01:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:44.082 19:01:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:44.082 19:01:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:44.082 19:01:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:44.082 19:01:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:44.082 { 00:16:44.082 "subsystems": [ 00:16:44.082 { 00:16:44.082 "subsystem": "bdev", 00:16:44.082 "config": [ 00:16:44.082 { 00:16:44.082 "params": { 00:16:44.082 "io_mechanism": "io_uring_cmd", 00:16:44.082 "conserve_cpu": false, 00:16:44.082 "filename": "/dev/ng0n1", 00:16:44.082 "name": "xnvme_bdev" 00:16:44.082 }, 00:16:44.082 "method": "bdev_xnvme_create" 00:16:44.082 }, 00:16:44.082 { 00:16:44.082 "method": "bdev_wait_for_examine" 00:16:44.082 } 00:16:44.082 ] 00:16:44.082 } 00:16:44.082 ] 00:16:44.082 } 00:16:44.082 [2024-11-26 19:01:15.068768] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:16:44.082 [2024-11-26 19:01:15.068914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72912 ] 00:16:44.082 [2024-11-26 19:01:15.243303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.340 [2024-11-26 19:01:15.346916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.598 Running I/O for 5 seconds... 
00:16:46.554 43267.00 IOPS, 169.01 MiB/s [2024-11-26T19:01:18.704Z] 42155.50 IOPS, 164.67 MiB/s [2024-11-26T19:01:20.078Z] 41403.67 IOPS, 161.73 MiB/s [2024-11-26T19:01:21.012Z] 41086.00 IOPS, 160.49 MiB/s 00:16:49.797 Latency(us) 00:16:49.797 [2024-11-26T19:01:21.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.797 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:49.797 xnvme_bdev : 5.00 40875.56 159.67 0.00 0.00 1560.91 114.50 9651.67 00:16:49.797 [2024-11-26T19:01:21.012Z] =================================================================================================================== 00:16:49.797 [2024-11-26T19:01:21.012Z] Total : 40875.56 159.67 0.00 0.00 1560.91 114.50 9651.67 00:16:50.733 ************************************ 00:16:50.733 END TEST xnvme_bdevperf 00:16:50.733 ************************************ 00:16:50.733 00:16:50.733 real 0m27.178s 00:16:50.733 user 0m16.196s 00:16:50.733 sys 0m10.509s 00:16:50.733 19:01:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.733 19:01:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:50.733 19:01:21 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:50.733 19:01:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:50.733 19:01:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.733 19:01:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:50.733 ************************************ 00:16:50.733 START TEST xnvme_fio_plugin 00:16:50.733 ************************************ 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:50.733 19:01:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:50.733 { 00:16:50.733 "subsystems": [ 00:16:50.733 { 00:16:50.733 "subsystem": "bdev", 00:16:50.733 "config": [ 00:16:50.733 { 00:16:50.733 "params": { 00:16:50.733 "io_mechanism": "io_uring_cmd", 00:16:50.733 "conserve_cpu": false, 00:16:50.733 "filename": "/dev/ng0n1", 00:16:50.733 "name": "xnvme_bdev" 00:16:50.733 }, 00:16:50.733 "method": "bdev_xnvme_create" 00:16:50.733 }, 00:16:50.733 { 00:16:50.733 "method": "bdev_wait_for_examine" 00:16:50.733 } 00:16:50.733 ] 00:16:50.733 } 00:16:50.733 ] 00:16:50.733 } 00:16:50.992 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:50.992 fio-3.35 00:16:50.992 Starting 1 thread 00:16:57.559 00:16:57.559 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73035: Tue Nov 26 19:01:27 2024 00:16:57.559 read: IOPS=51.4k, BW=201MiB/s (210MB/s)(1004MiB/5001msec) 00:16:57.559 slat (usec): min=3, max=129, avg= 3.75, stdev= 1.36 00:16:57.559 clat (usec): min=748, max=4805, avg=1092.91, stdev=178.74 00:16:57.559 lat (usec): min=751, max=4812, avg=1096.67, stdev=178.98 00:16:57.559 clat percentiles (usec): 00:16:57.559 | 1.00th=[ 840], 5.00th=[ 898], 10.00th=[ 930], 20.00th=[ 971], 00:16:57.559 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1090], 00:16:57.559 | 70.00th=[ 1139], 80.00th=[ 1188], 90.00th=[ 1303], 95.00th=[ 1418], 00:16:57.559 | 99.00th=[ 1614], 99.50th=[ 1696], 99.90th=[ 2311], 99.95th=[ 3130], 00:16:57.559 | 99.99th=[ 4686] 00:16:57.559 bw ( KiB/s): min=189440, max=217600, per=100.00%, avg=206108.44, stdev=11545.89, samples=9 00:16:57.559 iops : min=47360, max=54400, avg=51527.11, stdev=2886.47, samples=9 00:16:57.559 lat (usec) : 750=0.01%, 1000=29.89% 00:16:57.559 lat (msec) : 2=69.95%, 4=0.11%, 10=0.05% 00:16:57.559 cpu : usr=43.32%, sys=55.78%, ctx=11, majf=0, minf=762 00:16:57.559 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:57.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.559 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:57.559 issued rwts: 
total=257006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.559 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:57.559 00:16:57.559 Run status group 0 (all jobs): 00:16:57.559 READ: bw=201MiB/s (210MB/s), 201MiB/s-201MiB/s (210MB/s-210MB/s), io=1004MiB (1053MB), run=5001-5001msec 00:16:58.127 ----------------------------------------------------- 00:16:58.127 Suppressions used: 00:16:58.127 count bytes template 00:16:58.127 1 11 /usr/src/fio/parse.c 00:16:58.127 1 8 libtcmalloc_minimal.so 00:16:58.127 1 904 libcrypto.so 00:16:58.127 ----------------------------------------------------- 00:16:58.127 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:58.127 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:58.128 19:01:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:16:58.128 { 00:16:58.128 "subsystems": [ 00:16:58.128 { 00:16:58.128 "subsystem": "bdev", 00:16:58.128 "config": [ 00:16:58.128 { 00:16:58.128 "params": { 00:16:58.128 "io_mechanism": "io_uring_cmd", 00:16:58.128 "conserve_cpu": false, 00:16:58.128 "filename": "/dev/ng0n1", 00:16:58.128 "name": "xnvme_bdev" 00:16:58.128 }, 00:16:58.128 "method": "bdev_xnvme_create" 00:16:58.128 }, 00:16:58.128 { 00:16:58.128 "method": "bdev_wait_for_examine" 00:16:58.128 } 00:16:58.128 ] 00:16:58.128 } 00:16:58.128 ] 00:16:58.128 } 00:16:58.128 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:58.128 fio-3.35 00:16:58.128 Starting 1 thread 00:17:04.727 00:17:04.727 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73127: Tue Nov 26 19:01:35 2024 00:17:04.727 write: IOPS=50.0k, BW=195MiB/s (205MB/s)(977MiB/5001msec); 0 zone resets 00:17:04.727 slat (usec): min=3, max=144, avg= 4.30, stdev= 1.81 00:17:04.727 clat (usec): min=691, max=2811, avg=1108.70, stdev=166.75 00:17:04.727 lat (usec): min=694, max=2818, avg=1113.00, stdev=167.46 00:17:04.727 clat percentiles (usec): 00:17:04.727 | 1.00th=[ 857], 5.00th=[ 898], 10.00th=[ 930], 20.00th=[ 979], 00:17:04.727 | 30.00th=[ 1012], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1123], 00:17:04.727 | 70.00th=[ 1156], 80.00th=[ 1221], 90.00th=[ 1319], 95.00th=[ 1434], 00:17:04.727 | 99.00th=[ 1647], 99.50th=[ 1713], 99.90th=[ 1860], 99.95th=[ 2311], 00:17:04.727 | 99.99th=[ 2704] 00:17:04.727 bw ( KiB/s): min=195072, max=217600, per=100.00%, avg=200988.44, stdev=7590.35, samples=9 00:17:04.727 iops : min=48768, max=54400, avg=50247.11, stdev=1897.59, samples=9 00:17:04.727 lat (usec) : 750=0.01%, 1000=26.52% 00:17:04.727 lat (msec) : 2=73.40%, 4=0.06% 00:17:04.727 cpu : usr=45.90%, sys=53.08%, ctx=12, majf=0, minf=763 00:17:04.727 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:04.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.727 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:04.727 issued rwts: total=0,250176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:04.727 00:17:04.727 Run status group 0 (all jobs): 00:17:04.727 WRITE: bw=195MiB/s (205MB/s), 195MiB/s-195MiB/s (205MB/s-205MB/s), io=977MiB (1025MB), run=5001-5001msec 00:17:05.295 ----------------------------------------------------- 00:17:05.295 Suppressions used: 00:17:05.295 count bytes template 00:17:05.295 1 11 /usr/src/fio/parse.c 00:17:05.295 1 8 libtcmalloc_minimal.so 00:17:05.295 1 904 libcrypto.so 00:17:05.295 ----------------------------------------------------- 00:17:05.295 00:17:05.295 00:17:05.295 real 0m14.565s 00:17:05.295 user 0m8.131s 00:17:05.295 sys 0m6.049s 00:17:05.295 19:01:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.295 19:01:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:05.295 ************************************ 00:17:05.295 END TEST xnvme_fio_plugin 00:17:05.295 ************************************ 00:17:05.295 19:01:36 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:05.295 19:01:36 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:05.295 19:01:36 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:05.295 19:01:36 nvme_xnvme -- xnvme/xnvme.sh@86 
-- # run_test xnvme_rpc xnvme_rpc 00:17:05.295 19:01:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:05.295 19:01:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.295 19:01:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:05.295 ************************************ 00:17:05.295 START TEST xnvme_rpc 00:17:05.295 ************************************ 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73207 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73207 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73207 ']' 00:17:05.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.295 19:01:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.554 [2024-11-26 19:01:36.548056] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:17:05.554 [2024-11-26 19:01:36.548538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73207 ] 00:17:05.554 [2024-11-26 19:01:36.739311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.813 [2024-11-26 19:01:36.870316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.747 xnvme_bdev 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73207 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73207 ']' 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73207 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73207 00:17:06.747 killing process with pid 73207 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73207' 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73207 00:17:06.747 19:01:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73207 00:17:09.282 00:17:09.282 real 0m3.538s 00:17:09.282 user 0m3.831s 00:17:09.282 sys 0m0.443s 00:17:09.282 19:01:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.282 19:01:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.282 ************************************ 00:17:09.282 END TEST xnvme_rpc 00:17:09.282 ************************************ 00:17:09.282 19:01:40 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:09.282 19:01:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:09.282 19:01:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.282 19:01:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:09.282 ************************************ 00:17:09.282 START TEST xnvme_bdevperf 00:17:09.282 ************************************ 00:17:09.282 19:01:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:09.282 19:01:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:09.282 19:01:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:09.282 19:01:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:09.282 19:01:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:09.282 19:01:40 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:09.282 19:01:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:09.282 19:01:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:09.282 { 00:17:09.282 "subsystems": [ 00:17:09.282 { 00:17:09.282 "subsystem": "bdev", 00:17:09.282 "config": [ 00:17:09.282 { 00:17:09.282 "params": { 00:17:09.282 "io_mechanism": "io_uring_cmd", 00:17:09.282 "conserve_cpu": true, 00:17:09.282 "filename": "/dev/ng0n1", 00:17:09.282 "name": "xnvme_bdev" 00:17:09.282 }, 00:17:09.282 "method": "bdev_xnvme_create" 00:17:09.282 }, 00:17:09.282 { 00:17:09.282 "method": "bdev_wait_for_examine" 00:17:09.282 } 00:17:09.282 ] 00:17:09.282 } 00:17:09.282 ] 00:17:09.282 } 00:17:09.282 [2024-11-26 19:01:40.116953] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:17:09.282 [2024-11-26 19:01:40.117387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73287 ] 00:17:09.282 [2024-11-26 19:01:40.302021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.282 [2024-11-26 19:01:40.430366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.850 Running I/O for 5 seconds... 00:17:11.739 55360.00 IOPS, 216.25 MiB/s [2024-11-26T19:01:43.914Z] 54272.00 IOPS, 212.00 MiB/s [2024-11-26T19:01:44.847Z] 54592.00 IOPS, 213.25 MiB/s [2024-11-26T19:01:45.781Z] 54143.75 IOPS, 211.50 MiB/s [2024-11-26T19:01:45.781Z] 54271.80 IOPS, 212.00 MiB/s 00:17:14.566 Latency(us) 00:17:14.566 [2024-11-26T19:01:45.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.566 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:14.566 xnvme_bdev : 5.00 54261.24 211.96 0.00 0.00 1175.56 796.86 7745.16 00:17:14.566 [2024-11-26T19:01:45.781Z] =================================================================================================================== 00:17:14.566 [2024-11-26T19:01:45.781Z] Total : 54261.24 211.96 0.00 0.00 1175.56 796.86 7745.16 00:17:15.941 19:01:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:15.941 19:01:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:15.941 19:01:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:15.941 19:01:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:15.941 19:01:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:15.941 { 00:17:15.941 "subsystems": [ 00:17:15.941 { 00:17:15.941 "subsystem": "bdev", 00:17:15.941 "config": [ 00:17:15.941 { 00:17:15.941 "params": { 00:17:15.941 "io_mechanism": "io_uring_cmd", 00:17:15.941 "conserve_cpu": true, 00:17:15.941 "filename": "/dev/ng0n1", 00:17:15.941 "name": "xnvme_bdev" 00:17:15.941 }, 00:17:15.941 "method": "bdev_xnvme_create" 00:17:15.941 }, 00:17:15.941 { 00:17:15.941 "method": "bdev_wait_for_examine" 00:17:15.941 } 00:17:15.941 ] 00:17:15.941 } 00:17:15.941 ] 00:17:15.941 } 00:17:15.941 [2024-11-26 19:01:46.866797] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:17:15.941 [2024-11-26 19:01:46.866940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73361 ] 00:17:15.941 [2024-11-26 19:01:47.042586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.941 [2024-11-26 19:01:47.146908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.508 Running I/O for 5 seconds... 00:17:18.380 44224.00 IOPS, 172.75 MiB/s [2024-11-26T19:01:50.531Z] 44448.00 IOPS, 173.62 MiB/s [2024-11-26T19:01:51.907Z] 44096.00 IOPS, 172.25 MiB/s [2024-11-26T19:01:52.842Z] 44304.00 IOPS, 173.06 MiB/s 00:17:21.627 Latency(us) 00:17:21.627 [2024-11-26T19:01:52.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.627 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:21.627 xnvme_bdev : 5.00 44814.30 175.06 0.00 0.00 1422.92 804.31 5093.93 00:17:21.627 [2024-11-26T19:01:52.842Z] =================================================================================================================== 00:17:21.627 [2024-11-26T19:01:52.842Z] Total : 44814.30 175.06 0.00 0.00 1422.92 804.31 5093.93 00:17:22.647 19:01:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:22.647 19:01:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:22.647 19:01:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:22.647 19:01:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:22.647 19:01:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:22.647 { 00:17:22.647 "subsystems": [ 00:17:22.647 { 00:17:22.647 "subsystem": "bdev", 00:17:22.647 "config": [ 00:17:22.647 { 00:17:22.647 "params": { 00:17:22.647 "io_mechanism": "io_uring_cmd", 00:17:22.647 "conserve_cpu": true, 00:17:22.647 "filename": "/dev/ng0n1", 00:17:22.647 "name": "xnvme_bdev" 00:17:22.647 }, 00:17:22.647 "method": "bdev_xnvme_create" 00:17:22.647 }, 00:17:22.647 { 00:17:22.647 "method": "bdev_wait_for_examine" 00:17:22.647 } 00:17:22.647 ] 00:17:22.647 } 00:17:22.647 ] 00:17:22.647 } 00:17:22.647 [2024-11-26 19:01:53.621881] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:17:22.647 [2024-11-26 19:01:53.622058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73442 ] 00:17:22.647 [2024-11-26 19:01:53.807077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.906 [2024-11-26 19:01:53.912941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.164 Running I/O for 5 seconds... 
00:17:25.033 66688.00 IOPS, 260.50 MiB/s [2024-11-26T19:01:57.625Z] 67616.00 IOPS, 264.12 MiB/s [2024-11-26T19:01:58.560Z] 67306.67 IOPS, 262.92 MiB/s [2024-11-26T19:01:59.495Z] 68224.00 IOPS, 266.50 MiB/s 00:17:28.280 Latency(us) 00:17:28.280 [2024-11-26T19:01:59.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.280 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:28.280 xnvme_bdev : 5.00 69093.31 269.90 0.00 0.00 922.02 487.80 4736.47 00:17:28.280 [2024-11-26T19:01:59.495Z] =================================================================================================================== 00:17:28.280 [2024-11-26T19:01:59.495Z] Total : 69093.31 269.90 0.00 0.00 922.02 487.80 4736.47 00:17:29.234 19:02:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:29.234 19:02:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:29.234 19:02:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:29.234 19:02:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:29.234 19:02:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:29.234 { 00:17:29.234 "subsystems": [ 00:17:29.234 { 00:17:29.234 "subsystem": "bdev", 00:17:29.234 "config": [ 00:17:29.234 { 00:17:29.234 "params": { 00:17:29.234 "io_mechanism": "io_uring_cmd", 00:17:29.234 "conserve_cpu": true, 00:17:29.234 "filename": "/dev/ng0n1", 00:17:29.234 "name": "xnvme_bdev" 00:17:29.234 }, 00:17:29.234 "method": "bdev_xnvme_create" 00:17:29.234 }, 00:17:29.234 { 00:17:29.234 "method": "bdev_wait_for_examine" 00:17:29.234 } 00:17:29.234 ] 00:17:29.234 } 00:17:29.234 ] 00:17:29.234 } 00:17:29.234 [2024-11-26 19:02:00.319797] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:17:29.234 [2024-11-26 19:02:00.319979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73517 ] 00:17:29.493 [2024-11-26 19:02:00.493822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.493 [2024-11-26 19:02:00.596739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.751 Running I/O for 5 seconds... 
00:17:32.059 41966.00 IOPS, 163.93 MiB/s [2024-11-26T19:02:04.209Z] 39922.00 IOPS, 155.95 MiB/s [2024-11-26T19:02:05.144Z] 41006.67 IOPS, 160.18 MiB/s [2024-11-26T19:02:06.153Z] 40905.50 IOPS, 159.79 MiB/s [2024-11-26T19:02:06.153Z] 41294.40 IOPS, 161.31 MiB/s 00:17:34.938 Latency(us) 00:17:34.938 [2024-11-26T19:02:06.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.938 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:17:34.938 xnvme_bdev : 5.01 41258.60 161.17 0.00 0.00 1546.16 122.88 13762.56 00:17:34.938 [2024-11-26T19:02:06.153Z] =================================================================================================================== 00:17:34.938 [2024-11-26T19:02:06.153Z] Total : 41258.60 161.17 0.00 0.00 1546.16 122.88 13762.56 00:17:35.873 00:17:35.873 real 0m27.003s 00:17:35.873 user 0m20.475s 00:17:35.873 sys 0m5.056s 00:17:35.873 ************************************ 00:17:35.873 END TEST xnvme_bdevperf 00:17:35.873 ************************************ 00:17:35.873 19:02:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.873 19:02:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:35.873 19:02:07 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:35.873 19:02:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:35.873 19:02:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.873 19:02:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:35.873 ************************************ 00:17:35.873 START TEST xnvme_fio_plugin 00:17:35.873 ************************************ 00:17:35.873 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:35.873 19:02:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:35.873 19:02:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:17:35.873 19:02:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:35.873 19:02:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:35.873 19:02:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:35.873 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:35.873 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:35.873 19:02:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:35.873 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:35.874 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:35.874 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:35.874 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:17:35.874 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:35.874 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:35.874 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:35.874 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:35.874 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:35.874 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:36.133 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:36.133 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:36.133 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:36.133 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:36.133 19:02:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:36.133 { 00:17:36.133 "subsystems": [ 00:17:36.133 { 00:17:36.133 "subsystem": "bdev", 00:17:36.133 "config": [ 00:17:36.133 { 00:17:36.133 "params": { 00:17:36.133 "io_mechanism": "io_uring_cmd", 00:17:36.133 "conserve_cpu": true, 00:17:36.133 "filename": "/dev/ng0n1", 00:17:36.133 "name": "xnvme_bdev" 00:17:36.133 }, 00:17:36.133 "method": "bdev_xnvme_create" 00:17:36.133 }, 00:17:36.133 { 00:17:36.133 "method": "bdev_wait_for_examine" 00:17:36.133 } 00:17:36.133 ] 00:17:36.133 } 00:17:36.133 ] 00:17:36.133 } 00:17:36.133 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:36.133 fio-3.35 00:17:36.133 Starting 1 thread 00:17:42.693 00:17:42.693 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73635: Tue Nov 26 19:02:13 2024 00:17:42.693 read: IOPS=51.5k, BW=201MiB/s (211MB/s)(1007MiB/5002msec) 00:17:42.693 slat (usec): min=3, max=160, avg= 3.78, stdev= 1.39 00:17:42.693 clat (usec): min=727, max=3221, avg=1091.73, stdev=166.12 00:17:42.693 lat (usec): min=731, max=3236, avg=1095.50, stdev=166.48 00:17:42.693 clat percentiles (usec): 00:17:42.693 | 1.00th=[ 824], 5.00th=[ 881], 10.00th=[ 922], 20.00th=[ 963], 00:17:42.693 | 30.00th=[ 1004], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[ 1106], 00:17:42.693 | 70.00th=[ 1139], 80.00th=[ 1188], 90.00th=[ 1287], 95.00th=[ 1418], 00:17:42.693 | 99.00th=[ 1647], 99.50th=[ 1729], 99.90th=[ 1942], 99.95th=[ 2040], 00:17:42.693 | 99.99th=[ 3064] 00:17:42.693 bw ( KiB/s): min=192000, max=224768, per=100.00%, avg=206222.22, stdev=10048.70, samples=9 00:17:42.693 iops : min=48000, max=56192, avg=51555.56, stdev=2512.18, samples=9 00:17:42.693 lat (usec) : 750=0.01%, 1000=29.74% 00:17:42.693 lat (msec) : 2=70.18%, 4=0.07% 00:17:42.693 cpu : usr=74.15%, sys=22.88%, ctx=14, majf=0, minf=762 00:17:42.693 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:42.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.693 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:17:42.693 issued rwts: total=257664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:42.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:42.693 00:17:42.693 Run status group 0 (all jobs): 00:17:42.693 READ: bw=201MiB/s (211MB/s), 201MiB/s-201MiB/s (211MB/s-211MB/s), io=1007MiB (1055MB), run=5002-5002msec 00:17:43.261 ----------------------------------------------------- 00:17:43.261 Suppressions used: 00:17:43.261 count bytes template 00:17:43.261 1 11 /usr/src/fio/parse.c 00:17:43.261 1 8 libtcmalloc_minimal.so 00:17:43.261 1 904 libcrypto.so 00:17:43.261 ----------------------------------------------------- 00:17:43.261 00:17:43.261 19:02:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:43.261 19:02:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:43.261 19:02:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:43.262 19:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:43.262 { 00:17:43.262 "subsystems": [ 00:17:43.262 { 00:17:43.262 "subsystem": "bdev", 00:17:43.262 "config": [ 00:17:43.262 { 00:17:43.262 "params": { 00:17:43.262 "io_mechanism": "io_uring_cmd", 00:17:43.262 "conserve_cpu": true, 00:17:43.262 "filename": "/dev/ng0n1", 00:17:43.262 "name": "xnvme_bdev" 00:17:43.262 }, 00:17:43.262 "method": "bdev_xnvme_create" 00:17:43.262 }, 00:17:43.262 { 00:17:43.262 "method": "bdev_wait_for_examine" 00:17:43.262 } 00:17:43.262 ] 00:17:43.262 } 00:17:43.262 ] 00:17:43.262 } 00:17:43.520 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:43.520 fio-3.35 00:17:43.520 Starting 1 thread 00:17:50.104 00:17:50.104 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73726: Tue Nov 26 19:02:20 2024 00:17:50.104 write: IOPS=45.2k, BW=177MiB/s (185MB/s)(883MiB/5001msec); 0 zone resets 00:17:50.104 slat (usec): min=2, max=616, avg= 4.65, stdev= 4.29 00:17:50.104 clat (usec): min=111, max=9713, avg=1232.18, stdev=368.03 00:17:50.104 lat (usec): min=116, max=9721, avg=1236.83, stdev=368.65 00:17:50.104 clat percentiles (usec): 00:17:50.104 | 1.00th=[ 775], 5.00th=[ 881], 10.00th=[ 930], 20.00th=[ 1004], 00:17:50.104 | 30.00th=[ 1057], 40.00th=[ 1106], 50.00th=[ 1172], 60.00th=[ 1221], 00:17:50.104 | 70.00th=[ 1303], 80.00th=[ 1418], 90.00th=[ 1582], 95.00th=[ 1762], 00:17:50.104 | 99.00th=[ 2442], 99.50th=[ 2900], 99.90th=[ 4228], 99.95th=[ 6652], 00:17:50.104 | 99.99th=[ 9634] 00:17:50.104 bw ( KiB/s): min=170656, max=206336, per=100.00%, avg=182507.56, stdev=12164.51, samples=9 00:17:50.104 iops : min=42664, max=51584, avg=45626.89, stdev=3041.13, samples=9 00:17:50.104 lat (usec) : 250=0.04%, 500=0.21%, 750=0.60%, 1000=18.90% 00:17:50.104 lat (msec) : 2=77.99%, 4=2.14%, 10=0.12% 00:17:50.104 cpu : usr=65.72%, sys=29.16%, ctx=9, majf=0, minf=763 00:17:50.104 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=12.0%, 16=24.4%, 32=51.6%, >=64=1.7% 00:17:50.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.104 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:50.104 issued rwts: total=0,226037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.104 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:50.104 00:17:50.104 Run status group 0 (all jobs): 00:17:50.104 WRITE: bw=177MiB/s (185MB/s), 177MiB/s-177MiB/s (185MB/s-185MB/s), io=883MiB (926MB), run=5001-5001msec 00:17:50.671 ----------------------------------------------------- 00:17:50.671 Suppressions used: 00:17:50.671 count bytes template 00:17:50.671 1 11 /usr/src/fio/parse.c 00:17:50.671 1 8 libtcmalloc_minimal.so 00:17:50.671 1 904 libcrypto.so 00:17:50.671 ----------------------------------------------------- 00:17:50.671 00:17:50.671 00:17:50.671 real 0m14.662s 00:17:50.671 user 0m10.764s 00:17:50.671 sys 0m3.224s 00:17:50.671 19:02:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.671 19:02:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:50.671 ************************************ 00:17:50.671 END TEST xnvme_fio_plugin 00:17:50.671 ************************************ 00:17:50.671 19:02:21 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73207 00:17:50.671 19:02:21 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73207 ']' 00:17:50.671 19:02:21 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73207 00:17:50.671 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73207) - No such process 00:17:50.671 19:02:21 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73207 is not found' 00:17:50.671 Process with pid 73207 is not found 00:17:50.671 ************************************ 00:17:50.671 END TEST nvme_xnvme 00:17:50.671 ************************************ 00:17:50.671 00:17:50.671 real 3m45.867s 00:17:50.671 user 2m17.400s 00:17:50.671 sys 1m12.571s 00:17:50.671 19:02:21 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.671 19:02:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:50.671 19:02:21 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:50.671 19:02:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:50.671 19:02:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.671 19:02:21 -- common/autotest_common.sh@10 -- # set +x 00:17:50.671 ************************************ 00:17:50.671 START TEST blockdev_xnvme 00:17:50.671 ************************************ 00:17:50.671 19:02:21 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:50.930 * Looking for test storage... 00:17:50.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:50.930 19:02:21 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:50.930 19:02:21 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:17:50.930 19:02:21 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:50.930 19:02:21 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:17:50.930 19:02:21 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.931 19:02:21 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:17:50.931 19:02:21 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.931 19:02:21 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:50.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.931 --rc genhtml_branch_coverage=1 00:17:50.931 --rc genhtml_function_coverage=1 00:17:50.931 --rc genhtml_legend=1 00:17:50.931 --rc geninfo_all_blocks=1 00:17:50.931 --rc geninfo_unexecuted_blocks=1 00:17:50.931 00:17:50.931 ' 00:17:50.931 19:02:21 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:50.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.931 --rc genhtml_branch_coverage=1 00:17:50.931 --rc genhtml_function_coverage=1 00:17:50.931 --rc genhtml_legend=1 00:17:50.931 --rc geninfo_all_blocks=1 00:17:50.931 --rc geninfo_unexecuted_blocks=1 00:17:50.931 00:17:50.931 ' 00:17:50.931 19:02:21 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:50.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.931 --rc genhtml_branch_coverage=1 00:17:50.931 --rc genhtml_function_coverage=1 00:17:50.931 --rc genhtml_legend=1 00:17:50.931 --rc geninfo_all_blocks=1 00:17:50.931 --rc geninfo_unexecuted_blocks=1 00:17:50.931 00:17:50.931 ' 00:17:50.931 19:02:21 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:50.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.931 --rc genhtml_branch_coverage=1 00:17:50.931 --rc genhtml_function_coverage=1 00:17:50.931 --rc genhtml_legend=1 00:17:50.931 --rc geninfo_all_blocks=1 00:17:50.931 --rc geninfo_unexecuted_blocks=1 00:17:50.931 00:17:50.931 ' 00:17:50.931 19:02:21 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:50.931 19:02:21 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:50.931 19:02:21 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:50.931 19:02:21 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:50.931 19:02:21 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:50.931 19:02:21 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:50.931 19:02:21 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:17:50.931 19:02:21 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:50.931 19:02:21 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:17:50.931 19:02:21 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73865 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:50.931 19:02:22 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73865 00:17:50.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.931 19:02:22 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73865 ']' 00:17:50.931 19:02:22 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.931 19:02:22 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.931 19:02:22 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.931 19:02:22 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.931 19:02:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:50.931 [2024-11-26 19:02:22.136457] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:17:50.931 [2024-11-26 19:02:22.136823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73865 ] 00:17:51.190 [2024-11-26 19:02:22.338661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.448 [2024-11-26 19:02:22.442742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.015 19:02:23 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.015 19:02:23 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:17:52.015 19:02:23 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:52.015 19:02:23 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:17:52.015 19:02:23 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:17:52.015 19:02:23 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:17:52.015 19:02:23 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:52.583 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:53.151 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:17:53.151 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:17:53.151 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:17:53.151 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:17:53.151 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:17:53.151 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:17:53.152 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:53.152 19:02:24 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:17:53.152 19:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.152 19:02:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.152 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:17:53.152 nvme0n1 00:17:53.152 nvme0n2 00:17:53.152 nvme0n3 00:17:53.411 nvme1n1 00:17:53.411 nvme2n1 00:17:53.411 nvme3n1 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.411 19:02:24 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d331aac0-b700-4c85-a7aa-d51fd0e3d1d6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d331aac0-b700-4c85-a7aa-d51fd0e3d1d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "d9044895-4d9d-4b3f-9a38-b51ea4b4426c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d9044895-4d9d-4b3f-9a38-b51ea4b4426c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b91a51d4-cc3f-4ef6-950f-2788172eacf5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b91a51d4-cc3f-4ef6-950f-2788172eacf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "335e6672-c02b-4598-961f-2ca3487f8fd5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "335e6672-c02b-4598-961f-2ca3487f8fd5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4e590596-c64a-4dc2-be8a-e651e3eba87a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4e590596-c64a-4dc2-be8a-e651e3eba87a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cb4cf557-ecdc-44d8-b58c-bed649849a37"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cb4cf557-ecdc-44d8-b58c-bed649849a37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:53.411 19:02:24 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73865 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73865 ']' 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73865 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73865 00:17:53.411 killing process with pid 73865 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73865' 00:17:53.411 19:02:24 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73865 00:17:53.411 
19:02:24 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73865 00:17:55.941 19:02:26 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:55.941 19:02:26 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:55.941 19:02:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:55.941 19:02:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.941 19:02:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:55.941 ************************************ 00:17:55.941 START TEST bdev_hello_world 00:17:55.941 ************************************ 00:17:55.941 19:02:26 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:55.941 [2024-11-26 19:02:26.859902] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:17:55.941 [2024-11-26 19:02:26.860097] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74150 ] 00:17:55.941 [2024-11-26 19:02:27.048955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.200 [2024-11-26 19:02:27.179848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.458 [2024-11-26 19:02:27.636816] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:56.458 [2024-11-26 19:02:27.636885] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:17:56.458 [2024-11-26 19:02:27.636921] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:56.458 [2024-11-26 19:02:27.639292] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:56.458 [2024-11-26 19:02:27.639677] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:56.458 [2024-11-26 19:02:27.639722] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:56.458 [2024-11-26 19:02:27.639885] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
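The setup_xnvme_conf trace earlier in this phase reduces to a short recipe: skip any namespace whose sysfs queue/zoned attribute reads something other than "none", then register every remaining /dev/nvme*n* node as an xNVMe bdev over io_uring. A condensed sketch of that logic, in standalone form using scripts/rpc.py instead of the test's rpc_cmd batching (device names and the -c conserve-CPU flag as seen in this run):

    # Skip zoned namespaces, register the rest as xNVMe bdevs over io_uring.
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue
        zoned=/sys/block/${nvme##*/}/queue/zoned
        [[ -e $zoned && $(<"$zoned") != none ]] && continue
        scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring -c
    done

The six bdev_xnvme_create lines printed to the target above are exactly what this loop would emit for this VM's non-zoned namespaces.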
00:17:56.458 00:17:56.458 [2024-11-26 19:02:27.639916] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:57.834 00:17:57.834 ************************************ 00:17:57.834 END TEST bdev_hello_world 00:17:57.834 ************************************ 00:17:57.834 real 0m1.963s 00:17:57.834 user 0m1.596s 00:17:57.834 sys 0m0.247s 00:17:57.834 19:02:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.834 19:02:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:57.834 19:02:28 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:57.834 19:02:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:57.834 19:02:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.834 19:02:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:57.834 ************************************ 00:17:57.834 START TEST bdev_bounds 00:17:57.834 ************************************ 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74187 00:17:57.834 Process bdevio pid: 74187 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74187' 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74187 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74187 ']' 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.834 19:02:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:57.834 [2024-11-26 19:02:28.827362] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
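The bdev_hello_world test that just ended is a thin wrapper: it runs the hello_bdev example against the JSON config generated for this run, and the example opens the bdev, writes "Hello World!", reads it back, and stops the app. The equivalent manual invocation, copied from the trace above (shown relative to the spdk repo root; the test used absolute /home/vagrant/spdk_repo/spdk paths):

    # -b selects the bdev to exercise; the JSON file carries the
    # bdev_xnvme_create configuration saved earlier in this job.
    build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1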
00:17:57.834 [2024-11-26 19:02:28.827520] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74187 ] 00:17:57.834 [2024-11-26 19:02:29.005876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:58.092 [2024-11-26 19:02:29.131069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.092 [2024-11-26 19:02:29.131156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.092 [2024-11-26 19:02:29.131159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.027 19:02:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.027 19:02:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:59.027 19:02:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:59.027 I/O targets: 00:17:59.027 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:59.027 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:59.027 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:59.027 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:59.027 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:59.027 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:59.027 00:17:59.027 00:17:59.027 CUnit - A unit testing framework for C - Version 2.1-3 00:17:59.027 http://cunit.sourceforge.net/ 00:17:59.027 00:17:59.027 00:17:59.027 Suite: bdevio tests on: nvme3n1 00:17:59.027 Test: blockdev write read block ...passed 00:17:59.027 Test: blockdev write zeroes read block ...passed 00:17:59.027 Test: blockdev write zeroes read no split ...passed 00:17:59.027 Test: blockdev write zeroes read split ...passed 00:17:59.027 Test: blockdev write zeroes read split partial ...passed 00:17:59.027 Test: blockdev reset ...passed 00:17:59.027 Test: blockdev write read 8 blocks ...passed 00:17:59.027 Test: blockdev write read size > 128k ...passed 00:17:59.027 Test: blockdev write read invalid size ...passed 00:17:59.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:59.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:59.027 Test: blockdev write read max offset ...passed 00:17:59.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:59.027 Test: blockdev writev readv 8 blocks ...passed 00:17:59.027 Test: blockdev writev readv 30 x 1block ...passed 00:17:59.027 Test: blockdev writev readv block ...passed 00:17:59.027 Test: blockdev writev readv size > 128k ...passed 00:17:59.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:59.027 Test: blockdev comparev and writev ...passed 00:17:59.027 Test: blockdev nvme passthru rw ...passed 00:17:59.027 Test: blockdev nvme passthru vendor specific ...passed 00:17:59.027 Test: blockdev nvme admin passthru ...passed 00:17:59.027 Test: blockdev copy ...passed 00:17:59.027 Suite: bdevio tests on: nvme2n1 00:17:59.027 Test: blockdev write read block ...passed 00:17:59.027 Test: blockdev write zeroes read block ...passed 00:17:59.027 Test: blockdev write zeroes read no split ...passed 00:17:59.027 Test: blockdev write zeroes read split ...passed 00:17:59.027 Test: blockdev write zeroes read split partial ...passed 00:17:59.027 Test: blockdev reset ...passed 
00:17:59.027 Test: blockdev write read 8 blocks ...passed 00:17:59.027 Test: blockdev write read size > 128k ...passed 00:17:59.028 Test: blockdev write read invalid size ...passed 00:17:59.028 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:59.028 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:59.028 Test: blockdev write read max offset ...passed 00:17:59.028 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:59.028 Test: blockdev writev readv 8 blocks ...passed 00:17:59.028 Test: blockdev writev readv 30 x 1block ...passed 00:17:59.028 Test: blockdev writev readv block ...passed 00:17:59.028 Test: blockdev writev readv size > 128k ...passed 00:17:59.028 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:59.028 Test: blockdev comparev and writev ...passed 00:17:59.028 Test: blockdev nvme passthru rw ...passed 00:17:59.028 Test: blockdev nvme passthru vendor specific ...passed 00:17:59.028 Test: blockdev nvme admin passthru ...passed 00:17:59.028 Test: blockdev copy ...passed 00:17:59.028 Suite: bdevio tests on: nvme1n1 00:17:59.028 Test: blockdev write read block ...passed 00:17:59.028 Test: blockdev write zeroes read block ...passed 00:17:59.028 Test: blockdev write zeroes read no split ...passed 00:17:59.028 Test: blockdev write zeroes read split ...passed 00:17:59.286 Test: blockdev write zeroes read split partial ...passed 00:17:59.286 Test: blockdev reset ...passed 00:17:59.286 Test: blockdev write read 8 blocks ...passed 00:17:59.286 Test: blockdev write read size > 128k ...passed 00:17:59.286 Test: blockdev write read invalid size ...passed 00:17:59.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:59.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:59.286 Test: blockdev write read max offset ...passed 00:17:59.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:59.286 Test: blockdev writev readv 8 blocks ...passed 00:17:59.286 Test: blockdev writev readv 30 x 1block ...passed 00:17:59.286 Test: blockdev writev readv block ...passed 00:17:59.286 Test: blockdev writev readv size > 128k ...passed 00:17:59.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:59.286 Test: blockdev comparev and writev ...passed 00:17:59.286 Test: blockdev nvme passthru rw ...passed 00:17:59.286 Test: blockdev nvme passthru vendor specific ...passed 00:17:59.286 Test: blockdev nvme admin passthru ...passed 00:17:59.286 Test: blockdev copy ...passed 00:17:59.286 Suite: bdevio tests on: nvme0n3 00:17:59.286 Test: blockdev write read block ...passed 00:17:59.286 Test: blockdev write zeroes read block ...passed 00:17:59.286 Test: blockdev write zeroes read no split ...passed 00:17:59.286 Test: blockdev write zeroes read split ...passed 00:17:59.286 Test: blockdev write zeroes read split partial ...passed 00:17:59.286 Test: blockdev reset ...passed 00:17:59.286 Test: blockdev write read 8 blocks ...passed 00:17:59.286 Test: blockdev write read size > 128k ...passed 00:17:59.286 Test: blockdev write read invalid size ...passed 00:17:59.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:59.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:59.286 Test: blockdev write read max offset ...passed 00:17:59.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:59.286 Test: blockdev writev readv 8 blocks 
...passed 00:17:59.286 Test: blockdev writev readv 30 x 1block ...passed 00:17:59.286 Test: blockdev writev readv block ...passed 00:17:59.286 Test: blockdev writev readv size > 128k ...passed 00:17:59.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:59.286 Test: blockdev comparev and writev ...passed 00:17:59.286 Test: blockdev nvme passthru rw ...passed 00:17:59.286 Test: blockdev nvme passthru vendor specific ...passed 00:17:59.286 Test: blockdev nvme admin passthru ...passed 00:17:59.286 Test: blockdev copy ...passed 00:17:59.286 Suite: bdevio tests on: nvme0n2 00:17:59.286 Test: blockdev write read block ...passed 00:17:59.286 Test: blockdev write zeroes read block ...passed 00:17:59.286 Test: blockdev write zeroes read no split ...passed 00:17:59.286 Test: blockdev write zeroes read split ...passed 00:17:59.286 Test: blockdev write zeroes read split partial ...passed 00:17:59.286 Test: blockdev reset ...passed 00:17:59.286 Test: blockdev write read 8 blocks ...passed 00:17:59.286 Test: blockdev write read size > 128k ...passed 00:17:59.286 Test: blockdev write read invalid size ...passed 00:17:59.287 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:59.287 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:59.287 Test: blockdev write read max offset ...passed 00:17:59.287 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:59.287 Test: blockdev writev readv 8 blocks ...passed 00:17:59.287 Test: blockdev writev readv 30 x 1block ...passed 00:17:59.287 Test: blockdev writev readv block ...passed 00:17:59.287 Test: blockdev writev readv size > 128k ...passed 00:17:59.287 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:59.287 Test: blockdev comparev and writev ...passed 00:17:59.287 Test: blockdev nvme passthru rw ...passed 00:17:59.287 Test: blockdev nvme passthru vendor specific ...passed 00:17:59.287 Test: blockdev nvme admin passthru ...passed 00:17:59.287 Test: blockdev copy ...passed 00:17:59.287 Suite: bdevio tests on: nvme0n1 00:17:59.287 Test: blockdev write read block ...passed 00:17:59.287 Test: blockdev write zeroes read block ...passed 00:17:59.287 Test: blockdev write zeroes read no split ...passed 00:17:59.287 Test: blockdev write zeroes read split ...passed 00:17:59.287 Test: blockdev write zeroes read split partial ...passed 00:17:59.287 Test: blockdev reset ...passed 00:17:59.287 Test: blockdev write read 8 blocks ...passed 00:17:59.287 Test: blockdev write read size > 128k ...passed 00:17:59.287 Test: blockdev write read invalid size ...passed 00:17:59.287 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:59.287 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:59.287 Test: blockdev write read max offset ...passed 00:17:59.287 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:59.287 Test: blockdev writev readv 8 blocks ...passed 00:17:59.287 Test: blockdev writev readv 30 x 1block ...passed 00:17:59.287 Test: blockdev writev readv block ...passed 00:17:59.287 Test: blockdev writev readv size > 128k ...passed 00:17:59.287 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:59.287 Test: blockdev comparev and writev ...passed 00:17:59.287 Test: blockdev nvme passthru rw ...passed 00:17:59.287 Test: blockdev nvme passthru vendor specific ...passed 00:17:59.287 Test: blockdev nvme admin passthru ...passed 00:17:59.287 Test: blockdev copy ...passed 
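All six suites above come from a single bdevio process: it is launched with -w so it initializes and then waits, and tests.py perform_tests triggers the CUnit run over the app's RPC socket once it is listening. A minimal sketch of that orchestration, under the assumption that both tools use the default /var/tmp/spdk.sock; the polling loop here merely stands in for the repo's waitforlisten helper:

    # Start bdevio suspended, wait for its RPC socket, then run the suites.
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    test/bdev/bdevio/tests.py perform_tests
    # bdevio stays up afterwards; the test tears it down via killprocess.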
00:17:59.287 00:17:59.287 Run Summary: Type Total Ran Passed Failed Inactive 00:17:59.287 suites 6 6 n/a 0 0 00:17:59.287 tests 138 138 138 0 0 00:17:59.287 asserts 780 780 780 0 n/a 00:17:59.287 00:17:59.287 Elapsed time = 1.339 seconds 00:17:59.287 0 00:17:59.545 19:02:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74187 00:17:59.546 19:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74187 ']' 00:17:59.546 19:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74187 00:17:59.546 19:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:59.546 19:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.546 19:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74187 00:17:59.546 19:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.546 19:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.546 19:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74187' 00:17:59.546 killing process with pid 74187 00:17:59.546 19:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74187 00:17:59.546 19:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74187 00:18:00.482 19:02:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:00.482 ************************************ 00:18:00.482 END TEST bdev_bounds 00:18:00.482 ************************************ 00:18:00.482 00:18:00.482 real 0m2.826s 00:18:00.482 user 0m7.342s 00:18:00.482 sys 0m0.320s 00:18:00.482 19:02:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.482 19:02:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:00.482 19:02:31 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:18:00.482 19:02:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:00.482 19:02:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.482 19:02:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:00.482 ************************************ 00:18:00.482 START TEST bdev_nbd 00:18:00.482 ************************************ 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
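Between tests, every SPDK app is torn down through the killprocess helper whose trace appears above for pid 74187 (and earlier for pid 73865): a kill -0 liveness probe, a ps comm lookup so sudo-owned processes take a different path, then kill and wait so the reactor exits and releases its hugepages and RPC socket before the next app starts. Its essence, with the sudo branch elided:

    # Minimal killprocess: probe, signal, reap.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # reap; ignore exit status
    }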
00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74253 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74253 /var/tmp/spdk-nbd.sock 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74253 ']' 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.482 19:02:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:00.741 [2024-11-26 19:02:31.749819] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
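The bdev_nbd test starting here exercises the NBD export path: a bdev_svc app listens on /var/tmp/spdk-nbd.sock, each bdev is attached to a kernel /dev/nbdN device, proven readable with a single direct-I/O block, and detached again, after which nbd_get_disks must report an empty list. The shape of one such round-trip, using the same RPCs traced below (the dd output path is the repo's test/bdev/nbdtest scratch file):

    sock=/var/tmp/spdk-nbd.sock
    scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
    # waitfornbd-style probe: one 4 KiB direct read must succeed
    dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
    scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s "$sock" nbd_get_disks   # expect [] once all are stopped

nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, which the helpers below filter with jq -r '.[] | .nbd_device'.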
00:18:00.741 [2024-11-26 19:02:31.749979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.741 [2024-11-26 19:02:31.933006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.000 [2024-11-26 19:02:32.041269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:01.566 19:02:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.140 
1+0 records in 00:18:02.140 1+0 records out 00:18:02.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547882 s, 7.5 MB/s 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:02.140 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:18:02.399 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:02.399 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:02.399 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:02.399 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:02.399 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.400 1+0 records in 00:18:02.400 1+0 records out 00:18:02.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582453 s, 7.0 MB/s 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:02.400 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:02.658 19:02:33 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.658 1+0 records in 00:18:02.658 1+0 records out 00:18:02.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753393 s, 5.4 MB/s 00:18:02.658 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.917 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:02.917 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.917 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:02.917 19:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:02.917 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:02.917 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:02.917 19:02:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:18:03.176 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:03.176 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:03.176 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:03.176 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:18:03.176 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:03.176 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.177 1+0 records in 00:18:03.177 1+0 records out 00:18:03.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729886 s, 5.6 MB/s 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:03.177 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.435 1+0 records in 00:18:03.435 1+0 records out 00:18:03.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945364 s, 4.3 MB/s 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:03.435 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:18:04.002 19:02:34 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.002 1+0 records in 00:18:04.002 1+0 records out 00:18:04.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546985 s, 7.5 MB/s 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:04.002 19:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:04.261 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd0", 00:18:04.261 "bdev_name": "nvme0n1" 00:18:04.261 }, 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd1", 00:18:04.261 "bdev_name": "nvme0n2" 00:18:04.261 }, 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd2", 00:18:04.261 "bdev_name": "nvme0n3" 00:18:04.261 }, 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd3", 00:18:04.261 "bdev_name": "nvme1n1" 00:18:04.261 }, 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd4", 00:18:04.261 "bdev_name": "nvme2n1" 00:18:04.261 }, 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd5", 00:18:04.261 "bdev_name": "nvme3n1" 00:18:04.261 } 00:18:04.261 ]' 00:18:04.261 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:04.261 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd0", 00:18:04.261 "bdev_name": "nvme0n1" 00:18:04.261 }, 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd1", 00:18:04.261 "bdev_name": "nvme0n2" 00:18:04.261 }, 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd2", 00:18:04.261 "bdev_name": "nvme0n3" 00:18:04.261 }, 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd3", 00:18:04.261 "bdev_name": "nvme1n1" 00:18:04.261 }, 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd4", 00:18:04.261 "bdev_name": "nvme2n1" 00:18:04.261 }, 00:18:04.261 { 00:18:04.261 "nbd_device": "/dev/nbd5", 00:18:04.261 "bdev_name": "nvme3n1" 00:18:04.261 } 00:18:04.261 ]' 00:18:04.261 19:02:35 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:04.261 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:18:04.261 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.261 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:18:04.261 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:04.261 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:04.261 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.261 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:04.519 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:04.520 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:04.520 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:04.520 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.520 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.520 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:04.520 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:04.520 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.520 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.520 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:04.778 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:04.778 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:04.778 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:04.778 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.778 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.778 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:04.778 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:04.778 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.778 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.778 19:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:05.345 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:05.345 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:05.345 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:05.345 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.345 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.345 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:18:05.345 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:05.345 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.345 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.345 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:05.604 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:05.604 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:05.604 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:05.604 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.604 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.604 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:05.604 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:05.604 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.604 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.604 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:05.862 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:05.862 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:05.862 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:05.862 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.862 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.862 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:05.862 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:05.862 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.862 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.862 19:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:06.121 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:06.121 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:06.121 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:06.121 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.121 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.121 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:06.121 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:06.121 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.121 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:06.121 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:06.121 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:06.379 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:18:06.637 /dev/nbd0 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:06.910 1+0 records in 00:18:06.910 1+0 records out 00:18:06.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613749 s, 6.7 MB/s 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:06.910 19:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:18:07.169 /dev/nbd1 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.169 1+0 records in 00:18:07.169 1+0 records out 00:18:07.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564341 s, 7.3 MB/s 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:07.169 19:02:38 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:07.169 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:18:07.428 /dev/nbd10 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.428 1+0 records in 00:18:07.428 1+0 records out 00:18:07.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642946 s, 6.4 MB/s 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:07.428 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:18:07.686 /dev/nbd11 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:07.686 19:02:38 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.686 1+0 records in 00:18:07.686 1+0 records out 00:18:07.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577517 s, 7.1 MB/s 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:07.686 19:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:18:07.943 /dev/nbd12 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.943 1+0 records in 00:18:07.943 1+0 records out 00:18:07.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590352 s, 6.9 MB/s 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:07.943 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:18:08.510 /dev/nbd13 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.510 1+0 records in 00:18:08.510 1+0 records out 00:18:08.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683728 s, 6.0 MB/s 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:08.510 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd0", 00:18:08.769 "bdev_name": "nvme0n1" 00:18:08.769 }, 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd1", 00:18:08.769 "bdev_name": "nvme0n2" 00:18:08.769 }, 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd10", 00:18:08.769 "bdev_name": "nvme0n3" 00:18:08.769 }, 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd11", 00:18:08.769 "bdev_name": "nvme1n1" 00:18:08.769 }, 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd12", 00:18:08.769 "bdev_name": "nvme2n1" 00:18:08.769 }, 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd13", 00:18:08.769 "bdev_name": "nvme3n1" 00:18:08.769 } 00:18:08.769 ]' 00:18:08.769 19:02:39 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd0", 00:18:08.769 "bdev_name": "nvme0n1" 00:18:08.769 }, 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd1", 00:18:08.769 "bdev_name": "nvme0n2" 00:18:08.769 }, 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd10", 00:18:08.769 "bdev_name": "nvme0n3" 00:18:08.769 }, 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd11", 00:18:08.769 "bdev_name": "nvme1n1" 00:18:08.769 }, 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd12", 00:18:08.769 "bdev_name": "nvme2n1" 00:18:08.769 }, 00:18:08.769 { 00:18:08.769 "nbd_device": "/dev/nbd13", 00:18:08.769 "bdev_name": "nvme3n1" 00:18:08.769 } 00:18:08.769 ]' 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:08.769 /dev/nbd1 00:18:08.769 /dev/nbd10 00:18:08.769 /dev/nbd11 00:18:08.769 /dev/nbd12 00:18:08.769 /dev/nbd13' 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:08.769 /dev/nbd1 00:18:08.769 /dev/nbd10 00:18:08.769 /dev/nbd11 00:18:08.769 /dev/nbd12 00:18:08.769 /dev/nbd13' 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:08.769 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:08.769 256+0 records in 00:18:08.769 256+0 records out 00:18:08.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00658946 s, 159 MB/s 00:18:08.770 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:08.770 19:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:09.027 256+0 records in 00:18:09.027 256+0 records out 00:18:09.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166705 s, 6.3 MB/s 00:18:09.027 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:09.027 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:09.027 256+0 records in 00:18:09.027 256+0 records out 00:18:09.027 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.124532 s, 8.4 MB/s 00:18:09.027 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:09.027 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:09.284 256+0 records in 00:18:09.284 256+0 records out 00:18:09.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125613 s, 8.3 MB/s 00:18:09.284 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:09.284 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:09.284 256+0 records in 00:18:09.284 256+0 records out 00:18:09.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147379 s, 7.1 MB/s 00:18:09.284 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:09.284 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:09.543 256+0 records in 00:18:09.543 256+0 records out 00:18:09.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134586 s, 7.8 MB/s 00:18:09.543 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:09.543 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:09.543 256+0 records in 00:18:09.543 256+0 records out 00:18:09.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127967 s, 8.2 MB/s 00:18:09.543 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:18:09.543 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:09.543 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:09.543 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:09.543 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:09.543 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.800 19:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:10.056 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:10.056 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:10.056 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:10.056 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.056 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.056 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:10.056 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:10.056 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.056 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:10.056 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:10.620 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:10.620 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:10.621 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:10.621 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.621 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.621 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:10.621 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:10.621 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.621 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:10.621 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:18:10.878 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:10.878 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:10.878 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:10.878 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.878 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.878 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:10.878 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:10.878 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.878 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:10.878 19:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:11.136 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:11.136 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:11.136 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:11.136 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:11.136 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:11.136 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:11.136 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:11.136 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:11.136 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:11.136 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:11.393 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:18:11.393 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:11.393 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:11.393 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:11.393 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:11.393 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:11.394 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:11.394 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:11.394 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:11.394 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:12.018 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:12.018 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:12.018 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:12.018 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.018 19:02:42 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.018 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:12.018 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:12.018 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.018 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:12.018 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:12.018 19:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:12.276 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:12.534 malloc_lvol_verify 00:18:12.534 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:12.792 753b309a-8aeb-488d-9d84-719ec764ed03 00:18:12.792 19:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:13.358 38956377-6a6b-4dab-b51a-f115eda0e76f 00:18:13.358 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:13.618 /dev/nbd0 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
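Everything in the nbd block above is built from a handful of small shell helpers: nbd_get_count asks the RPC server for its disk list and counts /dev/nbd entries with jq and grep -c, waitfornbd_exit polls /proc/partitions until a stopped device's name disappears, and the two helpers sketched below gate the start path and the data check. This is a minimal reconstruction from the xtrace — the authoritative versions live in SPDK's test/bdev/nbd_common.sh and test/common/autotest_common.sh, and the sleep/retry details are simplified assumptions:

    waitfornbd() {
      # A device counts as "up" once it appears in /proc/partitions and answers
      # a 4 KiB O_DIRECT read. The trace retries each step up to 20 times; the
      # sleep interval below is an assumption, as it is elided from the trace.
      local nbd=$1 i size
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1
      done
      dd "if=/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]   # non-empty read-back means the device is servicing I/O
    }

    nbd_dd_data_verify() {
      # Write one shared 1 MiB random pattern to every device, then byte-compare
      # each device against the source file (the dd/cmp sequence in the trace).
      local tmp=/tmp/nbdrandtest nbd
      dd if=/dev/urandom of="$tmp" bs=4096 count=256
      for nbd in "$@"; do dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct; done
      for nbd in "$@"; do cmp -b -n 1M "$tmp" "$nbd"; done
      rm "$tmp"
    }

The final smoke test in this block builds a malloc bdev, carves an lvolstore and an lvol out of it, exports the lvol over /dev/nbd0, and runs mkfs.ext4 on it; the mke2fs output follows below.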
00:18:13.618 mke2fs 1.47.0 (5-Feb-2023) 00:18:13.618 Discarding device blocks: 0/4096 done 00:18:13.618 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:13.618 00:18:13.618 Allocating group tables: 0/1 done 00:18:13.618 Writing inode tables: 0/1 done 00:18:13.618 Creating journal (1024 blocks): done 00:18:13.618 Writing superblocks and filesystem accounting information: 0/1 done 00:18:13.618 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:13.618 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74253 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74253 ']' 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74253 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74253 00:18:13.876 killing process with pid 74253 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74253' 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74253 00:18:13.876 19:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74253 00:18:14.840 ************************************ 00:18:14.840 END TEST bdev_nbd 00:18:14.840 ************************************ 00:18:14.840 19:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:14.840 00:18:14.840 real 0m14.420s 00:18:14.840 user 0m20.980s 00:18:14.840 sys 0m4.589s 00:18:14.840 19:02:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.840 
19:02:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:15.099 19:02:46 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:15.099 19:02:46 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:18:15.099 19:02:46 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:18:15.099 19:02:46 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:15.099 19:02:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:15.099 19:02:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.099 19:02:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:15.099 ************************************ 00:18:15.099 START TEST bdev_fio 00:18:15.099 ************************************ 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:15.099 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:15.099 ************************************ 00:18:15.099 START TEST bdev_fio_rw_verify 00:18:15.099 ************************************ 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:15.099 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:15.100 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:18:15.100 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:15.100 19:02:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:15.358 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:15.358 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:15.358 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:15.358 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:15.358 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:15.358 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:15.358 fio-3.35 00:18:15.358 Starting 6 threads 00:18:27.553 00:18:27.553 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74690: Tue Nov 26 19:02:57 2024 00:18:27.553 read: IOPS=25.8k, BW=101MiB/s (106MB/s)(1008MiB/10001msec) 00:18:27.553 slat (usec): min=3, max=2922, avg= 7.41, stdev= 8.77 00:18:27.553 clat (usec): min=142, max=14559, avg=721.87, 
stdev=410.25 00:18:27.553 lat (usec): min=148, max=14572, avg=729.28, stdev=410.83 00:18:27.553 clat percentiles (usec): 00:18:27.553 | 50.000th=[ 701], 99.000th=[ 1811], 99.900th=[ 5145], 99.990th=[ 9372], 00:18:27.553 | 99.999th=[14484] 00:18:27.553 write: IOPS=26.1k, BW=102MiB/s (107MB/s)(1021MiB/10001msec); 0 zone resets 00:18:27.553 slat (usec): min=14, max=4962, avg=29.98, stdev=38.94 00:18:27.553 clat (usec): min=80, max=15039, avg=805.88, stdev=460.32 00:18:27.553 lat (usec): min=119, max=15080, avg=835.85, stdev=463.48 00:18:27.553 clat percentiles (usec): 00:18:27.553 | 50.000th=[ 766], 99.000th=[ 2147], 99.900th=[ 5735], 99.990th=[14615], 00:18:27.553 | 99.999th=[15008] 00:18:27.553 bw ( KiB/s): min=69576, max=131464, per=99.86%, avg=104430.42, stdev=2508.18, samples=114 00:18:27.553 iops : min=17394, max=32866, avg=26107.16, stdev=627.06, samples=114 00:18:27.553 lat (usec) : 100=0.01%, 250=2.09%, 500=18.39%, 750=32.12%, 1000=32.51% 00:18:27.553 lat (msec) : 2=13.89%, 4=0.70%, 10=0.27%, 20=0.01% 00:18:27.553 cpu : usr=59.94%, sys=26.05%, ctx=6748, majf=0, minf=22506 00:18:27.553 IO depths : 1=12.1%, 2=24.7%, 4=50.3%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.553 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.554 issued rwts: total=258169,261462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.554 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:27.554 00:18:27.554 Run status group 0 (all jobs): 00:18:27.554 READ: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=1008MiB (1057MB), run=10001-10001msec 00:18:27.554 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=1021MiB (1071MB), run=10001-10001msec 00:18:27.554 ----------------------------------------------------- 00:18:27.554 Suppressions used: 00:18:27.554 count bytes template 00:18:27.554 6 48 /usr/src/fio/parse.c 00:18:27.554 3114 298944 /usr/src/fio/iolog.c 00:18:27.554 1 8 libtcmalloc_minimal.so 00:18:27.554 1 904 libcrypto.so 00:18:27.554 ----------------------------------------------------- 00:18:27.554 00:18:27.554 00:18:27.554 real 0m12.347s 00:18:27.554 user 0m37.861s 00:18:27.554 sys 0m15.953s 00:18:27.554 ************************************ 00:18:27.554 END TEST bdev_fio_rw_verify 00:18:27.554 ************************************ 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d331aac0-b700-4c85-a7aa-d51fd0e3d1d6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d331aac0-b700-4c85-a7aa-d51fd0e3d1d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "d9044895-4d9d-4b3f-9a38-b51ea4b4426c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d9044895-4d9d-4b3f-9a38-b51ea4b4426c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b91a51d4-cc3f-4ef6-950f-2788172eacf5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b91a51d4-cc3f-4ef6-950f-2788172eacf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "335e6672-c02b-4598-961f-2ca3487f8fd5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "335e6672-c02b-4598-961f-2ca3487f8fd5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4e590596-c64a-4dc2-be8a-e651e3eba87a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4e590596-c64a-4dc2-be8a-e651e3eba87a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cb4cf557-ecdc-44d8-b58c-bed649849a37"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cb4cf557-ecdc-44d8-b58c-bed649849a37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:27.554 /home/vagrant/spdk_repo/spdk 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:18:27.554 00:18:27.554 real 0m12.547s 00:18:27.554 user 0m37.956s 00:18:27.554 sys 0m16.052s 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.554 19:02:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:27.554 ************************************ 00:18:27.554 END TEST bdev_fio 00:18:27.554 ************************************ 00:18:27.554 19:02:58 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:27.554 19:02:58 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:27.554 19:02:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:27.554 19:02:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.554 19:02:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:27.554 ************************************ 00:18:27.554 START TEST bdev_verify 00:18:27.554 ************************************ 00:18:27.554 19:02:58 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:27.554 [2024-11-26 19:02:58.757656] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:18:27.554 [2024-11-26 19:02:58.757827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74865 ] 00:18:27.813 [2024-11-26 19:02:58.936464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:28.071 [2024-11-26 19:02:59.070124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.071 [2024-11-26 19:02:59.070131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.329 Running I/O for 5 seconds... 
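For reference, the bdevperf flags recorded in the run_test line above: -q 128 is the per-job queue depth, -o 4096 the I/O size in bytes, -w verify a read workload with data verification, -t 5 the run time in seconds, and -m 0x3 a core mask covering cores 0 and 1; as used here, -C lets every core submit I/O to every bdev, which is why each device reports one job per core mask in the results below. A hedged standalone sketch of the same invocation, with paths as used by this job:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3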
00:18:30.637 20672.00 IOPS, 80.75 MiB/s [2024-11-26T19:03:03.226Z] 20944.00 IOPS, 81.81 MiB/s [2024-11-26T19:03:04.161Z] 21248.00 IOPS, 83.00 MiB/s [2024-11-26T19:03:04.726Z] 20904.00 IOPS, 81.66 MiB/s
00:18:33.511 Latency(us)
00:18:33.511 [2024-11-26T19:03:04.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:33.511 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.511 Verification LBA range: start 0x0 length 0x80000
00:18:33.511 nvme0n1 : 5.08 1563.15 6.11 0.00 0.00 81736.38 13047.62 106287.48
00:18:33.512 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.512 Verification LBA range: start 0x80000 length 0x80000
00:18:33.512 nvme0n1 : 5.06 1492.74 5.83 0.00 0.00 85584.73 9115.46 112960.23
00:18:33.512 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.512 Verification LBA range: start 0x0 length 0x80000
00:18:33.512 nvme0n2 : 5.08 1561.83 6.10 0.00 0.00 81670.58 17635.14 98184.84
00:18:33.512 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.512 Verification LBA range: start 0x80000 length 0x80000
00:18:33.512 nvme0n2 : 5.04 1473.70 5.76 0.00 0.00 86531.87 11319.85 108193.98
00:18:33.512 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.512 Verification LBA range: start 0x0 length 0x80000
00:18:33.512 nvme0n3 : 5.09 1560.49 6.10 0.00 0.00 81605.45 13166.78 104380.97
00:18:33.512 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.512 Verification LBA range: start 0x80000 length 0x80000
00:18:33.512 nvme0n3 : 5.06 1491.96 5.83 0.00 0.00 85294.72 10724.07 107240.73
00:18:33.512 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.512 Verification LBA range: start 0x0 length 0xbd0bd
00:18:33.512 nvme1n1 : 5.10 2642.44 10.32 0.00 0.00 47991.94 4617.31 80073.08
00:18:33.512 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.512 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:18:33.512 nvme1n1 : 5.06 2515.44 9.83 0.00 0.00 50397.56 5391.83 70540.57
00:18:33.512 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.512 Verification LBA range: start 0x0 length 0xa0000
00:18:33.512 nvme2n1 : 5.07 1566.44 6.12 0.00 0.00 80946.42 7804.74 106764.10
00:18:33.512 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.512 Verification LBA range: start 0xa0000 length 0xa0000
00:18:33.512 nvme2n1 : 5.07 1516.20 5.92 0.00 0.00 83569.74 9055.88 115343.36
00:18:33.512 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:33.512 Verification LBA range: start 0x0 length 0x20000
00:18:33.512 nvme3n1 : 5.07 1564.51 6.11 0.00 0.00 80919.72 9055.88 88652.33
00:18:33.512 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:33.512 Verification LBA range: start 0x20000 length 0x20000
00:18:33.512 nvme3n1 : 5.07 1490.33 5.82 0.00 0.00 84878.33 11379.43 104380.97
00:18:33.512 [2024-11-26T19:03:04.727Z] ===================================================================================================================
00:18:33.512 [2024-11-26T19:03:04.727Z] Total : 20439.23 79.84 0.00 0.00 74609.35 4617.31 115343.36
00:18:34.884
00:18:34.884 real 0m7.286s
00:18:34.884 user 0m11.576s
00:18:34.884 sys 0m1.721s
00:18:34.884 19:03:05 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:34.884
************************************ 00:18:34.884 END TEST bdev_verify 00:18:34.884 19:03:05 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:34.884 ************************************ 00:18:34.884 19:03:05 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:34.884 19:03:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:34.884 19:03:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.884 19:03:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:34.884 ************************************ 00:18:34.884 START TEST bdev_verify_big_io 00:18:34.884 ************************************ 00:18:34.884 19:03:05 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:35.141 [2024-11-26 19:03:06.108643] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:18:35.141 [2024-11-26 19:03:06.108811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74963 ] 00:18:35.141 [2024-11-26 19:03:06.293080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:35.399 [2024-11-26 19:03:06.415165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.399 [2024-11-26 19:03:06.415166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.964 Running I/O for 5 seconds... 
00:18:42.050 1443.00 IOPS, 90.19 MiB/s [2024-11-26T19:03:13.265Z] 3705.50 IOPS, 231.59 MiB/s
00:18:42.050 Latency(us)
00:18:42.050 [2024-11-26T19:03:13.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:42.050 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0x0 length 0x8000
00:18:42.050 nvme0n1 : 6.05 97.86 6.12 0.00 0.00 1247555.86 139174.63 1304047.24
00:18:42.050 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0x8000 length 0x8000
00:18:42.050 nvme0n1 : 6.06 142.55 8.91 0.00 0.00 854880.71 85792.58 1052389.00
00:18:42.050 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0x0 length 0x8000
00:18:42.050 nvme0n2 : 6.06 137.20 8.58 0.00 0.00 855743.19 62437.93 861738.82
00:18:42.050 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0x8000 length 0x8000
00:18:42.050 nvme0n2 : 6.08 107.94 6.75 0.00 0.00 1107338.98 110100.48 907494.87
00:18:42.050 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0x0 length 0x8000
00:18:42.050 nvme0n3 : 6.05 117.64 7.35 0.00 0.00 989830.57 145847.39 1311673.25
00:18:42.050 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0x8000 length 0x8000
00:18:42.050 nvme0n3 : 6.06 130.62 8.16 0.00 0.00 889375.27 90082.21 899868.86
00:18:42.050 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0x0 length 0xbd0b
00:18:42.050 nvme1n1 : 6.08 89.49 5.59 0.00 0.00 1253717.12 6255.71 1799737.72
00:18:42.050 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0xbd0b length 0xbd0b
00:18:42.050 nvme1n1 : 6.07 105.42 6.59 0.00 0.00 1067460.84 15132.86 1860745.77
00:18:42.050 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0x0 length 0xa000
00:18:42.050 nvme2n1 : 6.07 130.46 8.15 0.00 0.00 854513.76 9413.35 1365055.30
00:18:42.050 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0xa000 length 0xa000
00:18:42.050 nvme2n1 : 6.08 119.68 7.48 0.00 0.00 908441.08 8162.21 2226794.12
00:18:42.050 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0x0 length 0x2000
00:18:42.050 nvme3n1 : 6.08 144.62 9.04 0.00 0.00 746973.20 9830.40 1204909.15
00:18:42.050 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:42.050 Verification LBA range: start 0x2000 length 0x2000
00:18:42.050 nvme3n1 : 6.09 97.73 6.11 0.00 0.00 1089112.22 2606.55 3492711.33
00:18:42.050 [2024-11-26T19:03:13.265Z] ===================================================================================================================
00:18:42.050 [2024-11-26T19:03:13.265Z] Total : 1421.22 88.83 0.00 0.00 965965.29 2606.55 3492711.33
00:18:43.425
00:18:43.425 real 0m8.487s
00:18:43.425 user 0m15.498s
00:18:43.425 sys 0m0.491s
00:18:43.425 19:03:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:43.425 19:03:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
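A quick sanity check on these result tables: the MiB/s column is simply IOPS times the I/O size (-o) divided by 2^20, which reproduces both totals:

  # verify pass, -o 4096:  prints 79.84 (matches the Total above at 20439.23 IOPS)
  awk 'BEGIN { printf "%.2f\n", 20439.23 * 4096 / 1048576 }'
  # big_io pass, -o 65536: prints 88.83 (matches the Total above at 1421.22 IOPS)
  awk 'BEGIN { printf "%.2f\n", 1421.22 * 65536 / 1048576 }'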
00:18:43.425 ************************************
00:18:43.425 END TEST bdev_verify_big_io
00:18:43.425 ************************************
00:18:43.425 19:03:14 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:43.425 19:03:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:18:43.425 19:03:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:43.425 19:03:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:43.425 ************************************
00:18:43.425 START TEST bdev_write_zeroes
00:18:43.425 ************************************
00:18:43.425 19:03:14 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:18:43.684 [2024-11-26 19:03:14.652727] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... [2024-11-26 19:03:14.652932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75075 ]
00:18:43.684 [2024-11-26 19:03:14.841147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:43.976 [2024-11-26 19:03:14.981316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:44.235 Running I/O for 1 seconds...
00:18:45.608 66336.00 IOPS, 259.12 MiB/s
00:18:45.608 Latency(us)
00:18:45.608 [2024-11-26T19:03:16.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:45.608 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:45.608 nvme0n1 : 1.03 10056.26 39.28 0.00 0.00 12714.59 6940.86 27286.81
00:18:45.608 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:45.608 nvme0n2 : 1.02 10173.18 39.74 0.00 0.00 12556.11 6881.28 25022.84
00:18:45.608 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:45.608 nvme0n3 : 1.02 10156.94 39.68 0.00 0.00 12567.99 7030.23 25380.31
00:18:45.608 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:45.608 nvme1n1 : 1.03 15123.88 59.08 0.00 0.00 8430.79 4230.05 20733.21
00:18:45.608 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:45.608 nvme2n1 : 1.03 10087.70 39.41 0.00 0.00 12573.22 6940.86 29074.15
00:18:45.608 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:18:45.608 nvme3n1 : 1.03 10071.93 39.34 0.00 0.00 12583.71 7000.44 29074.15
00:18:45.608 [2024-11-26T19:03:16.823Z] ===================================================================================================================
00:18:45.608 [2024-11-26T19:03:16.823Z] Total : 65669.88 256.52 0.00 0.00 11638.72 4230.05 29074.15
00:18:46.543
00:18:46.543 real 0m3.015s
00:18:46.543 user 0m2.241s
00:18:46.543 sys 0m0.582s
00:18:46.543 19:03:17 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:46.543 19:03:17 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:18:46.543 ************************************
00:18:46.543 END TEST
bdev_write_zeroes 00:18:46.543 ************************************ 00:18:46.543 19:03:17 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:46.543 19:03:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:46.543 19:03:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.543 19:03:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:46.543 ************************************ 00:18:46.543 START TEST bdev_json_nonenclosed 00:18:46.543 ************************************ 00:18:46.543 19:03:17 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:46.543 [2024-11-26 19:03:17.730589] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:18:46.543 [2024-11-26 19:03:17.730802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75129 ] 00:18:46.802 [2024-11-26 19:03:17.912955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.060 [2024-11-26 19:03:18.016749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.060 [2024-11-26 19:03:18.016869] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:47.060 [2024-11-26 19:03:18.016898] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:47.060 [2024-11-26 19:03:18.016911] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:47.319 00:18:47.319 real 0m0.678s 00:18:47.319 user 0m0.420s 00:18:47.319 sys 0m0.151s 00:18:47.319 19:03:18 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.319 ************************************ 00:18:47.319 19:03:18 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:47.319 END TEST bdev_json_nonenclosed 00:18:47.319 ************************************ 00:18:47.319 19:03:18 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:47.319 19:03:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:47.319 19:03:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.319 19:03:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:47.319 ************************************ 00:18:47.319 START TEST bdev_json_nonarray 00:18:47.319 ************************************ 00:18:47.319 19:03:18 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:47.319 [2024-11-26 19:03:18.415760] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:18:47.319 [2024-11-26 19:03:18.415927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75160 ] 00:18:47.577 [2024-11-26 19:03:18.600213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.577 [2024-11-26 19:03:18.725991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.577 [2024-11-26 19:03:18.726123] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:18:47.577 [2024-11-26 19:03:18.726157] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:47.577 [2024-11-26 19:03:18.726197] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:47.836 00:18:47.836 real 0m0.710s 00:18:47.836 user 0m0.481s 00:18:47.836 sys 0m0.121s 00:18:47.836 19:03:19 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.836 19:03:19 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:47.836 ************************************ 00:18:47.836 END TEST bdev_json_nonarray 00:18:47.836 ************************************ 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:18:48.094 19:03:19 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:48.353 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:49.313 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:49.313 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:49.313 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:49.313 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:49.313 00:18:49.313 real 0m58.582s 00:18:49.313 user 1m44.319s 00:18:49.313 sys 0m26.697s 00:18:49.313 19:03:20 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.313 19:03:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:49.314 ************************************ 00:18:49.314 END TEST blockdev_xnvme 00:18:49.314 ************************************ 00:18:49.314 19:03:20 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:49.314 19:03:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:49.314 19:03:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.314 19:03:20 -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.314 ************************************ 00:18:49.314 START TEST ublk 00:18:49.314 ************************************ 00:18:49.314 19:03:20 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:49.573 * Looking for test storage... 00:18:49.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:49.573 19:03:20 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:49.573 19:03:20 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:49.573 19:03:20 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:18:49.573 19:03:20 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:49.573 19:03:20 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.573 19:03:20 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.573 19:03:20 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.573 19:03:20 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.573 19:03:20 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.573 19:03:20 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.573 19:03:20 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.573 19:03:20 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.573 19:03:20 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.573 19:03:20 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.573 19:03:20 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.573 19:03:20 ublk -- scripts/common.sh@344 -- # case "$op" in 00:18:49.573 19:03:20 ublk -- scripts/common.sh@345 -- # : 1 00:18:49.573 19:03:20 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.573 19:03:20 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.573 19:03:20 ublk -- scripts/common.sh@365 -- # decimal 1 00:18:49.573 19:03:20 ublk -- scripts/common.sh@353 -- # local d=1 00:18:49.573 19:03:20 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.573 19:03:20 ublk -- scripts/common.sh@355 -- # echo 1 00:18:49.573 19:03:20 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.573 19:03:20 ublk -- scripts/common.sh@366 -- # decimal 2 00:18:49.573 19:03:20 ublk -- scripts/common.sh@353 -- # local d=2 00:18:49.573 19:03:20 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.573 19:03:20 ublk -- scripts/common.sh@355 -- # echo 2 00:18:49.573 19:03:20 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.573 19:03:20 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.573 19:03:20 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.573 19:03:20 ublk -- scripts/common.sh@368 -- # return 0 00:18:49.574 19:03:20 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.574 19:03:20 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:49.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.574 --rc genhtml_branch_coverage=1 00:18:49.574 --rc genhtml_function_coverage=1 00:18:49.574 --rc genhtml_legend=1 00:18:49.574 --rc geninfo_all_blocks=1 00:18:49.574 --rc geninfo_unexecuted_blocks=1 00:18:49.574 00:18:49.574 ' 00:18:49.574 19:03:20 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:49.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.574 --rc genhtml_branch_coverage=1 00:18:49.574 --rc genhtml_function_coverage=1 00:18:49.574 --rc genhtml_legend=1 00:18:49.574 --rc geninfo_all_blocks=1 00:18:49.574 --rc geninfo_unexecuted_blocks=1 00:18:49.574 00:18:49.574 ' 00:18:49.574 19:03:20 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:49.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.574 --rc genhtml_branch_coverage=1 00:18:49.574 --rc genhtml_function_coverage=1 00:18:49.574 --rc genhtml_legend=1 00:18:49.574 --rc geninfo_all_blocks=1 00:18:49.574 --rc geninfo_unexecuted_blocks=1 00:18:49.574 00:18:49.574 ' 00:18:49.574 19:03:20 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:49.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.574 --rc genhtml_branch_coverage=1 00:18:49.574 --rc genhtml_function_coverage=1 00:18:49.574 --rc genhtml_legend=1 00:18:49.574 --rc geninfo_all_blocks=1 00:18:49.574 --rc geninfo_unexecuted_blocks=1 00:18:49.574 00:18:49.574 ' 00:18:49.574 19:03:20 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:49.574 19:03:20 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:49.574 19:03:20 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:49.574 19:03:20 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:49.574 19:03:20 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:49.574 19:03:20 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:49.574 19:03:20 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:49.574 19:03:20 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:49.574 19:03:20 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:49.574 19:03:20 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:49.574 19:03:20 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:49.574 19:03:20 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:49.574 19:03:20 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:49.574 19:03:20 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:49.574 19:03:20 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:49.574 19:03:20 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:49.574 19:03:20 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:49.574 19:03:20 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:49.574 19:03:20 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:49.574 19:03:20 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:49.574 19:03:20 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:49.574 19:03:20 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.574 19:03:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:49.574 ************************************ 00:18:49.574 START TEST test_save_ublk_config 00:18:49.574 ************************************ 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75443 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75443 00:18:49.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75443 ']' 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.574 19:03:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:49.833 [2024-11-26 19:03:20.799321] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
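In outline, test_save_ublk_config starts spdk_tgt with ublk tracing (-L ublk), creates the ublk target plus a malloc-backed ublk disk over RPC, and then snapshots the whole runtime state with save_config so it can be replayed into a fresh target. A hedged sketch of that flow with the stock scripts/rpc.py client; flag spellings may differ slightly across SPDK versions, and the malloc sizing mirrors the saved config below (8192 blocks x 4096 B = 32 MiB):

  ./scripts/rpc.py ublk_create_target                      # cpumask "1" in the saved config
  ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 32 MiB bdev, 4096 B blocks
  ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0
  ./scripts/rpc.py save_config > saved.json                # the JSON dumped below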
00:18:49.833 [2024-11-26 19:03:20.799473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75443 ] 00:18:49.833 [2024-11-26 19:03:20.971836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.091 [2024-11-26 19:03:21.076601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.027 19:03:21 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.027 19:03:21 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:51.027 19:03:21 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:51.027 19:03:21 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:51.027 19:03:21 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.027 19:03:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:51.027 [2024-11-26 19:03:21.974228] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:51.027 [2024-11-26 19:03:21.975430] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:51.027 malloc0 00:18:51.027 [2024-11-26 19:03:22.070415] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:51.027 [2024-11-26 19:03:22.070558] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:51.027 [2024-11-26 19:03:22.070579] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:51.027 [2024-11-26 19:03:22.070590] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:51.027 [2024-11-26 19:03:22.079354] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:51.027 [2024-11-26 19:03:22.079399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:51.027 [2024-11-26 19:03:22.086238] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:51.027 [2024-11-26 19:03:22.086423] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:51.027 [2024-11-26 19:03:22.103250] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:51.027 0 00:18:51.027 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.027 19:03:22 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:51.027 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.027 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:51.286 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.286 19:03:22 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:51.286 "subsystems": [ 00:18:51.286 { 00:18:51.286 "subsystem": "fsdev", 00:18:51.286 "config": [ 00:18:51.286 { 00:18:51.286 "method": "fsdev_set_opts", 00:18:51.286 "params": { 00:18:51.286 "fsdev_io_pool_size": 65535, 00:18:51.286 "fsdev_io_cache_size": 256 00:18:51.286 } 00:18:51.286 } 00:18:51.286 ] 00:18:51.286 }, 00:18:51.286 { 00:18:51.286 "subsystem": "keyring", 00:18:51.286 "config": [] 00:18:51.286 }, 00:18:51.286 { 00:18:51.286 "subsystem": "iobuf", 00:18:51.286 "config": [ 00:18:51.286 { 
00:18:51.286 "method": "iobuf_set_options", 00:18:51.286 "params": { 00:18:51.286 "small_pool_count": 8192, 00:18:51.286 "large_pool_count": 1024, 00:18:51.286 "small_bufsize": 8192, 00:18:51.286 "large_bufsize": 135168, 00:18:51.286 "enable_numa": false 00:18:51.286 } 00:18:51.286 } 00:18:51.286 ] 00:18:51.286 }, 00:18:51.286 { 00:18:51.286 "subsystem": "sock", 00:18:51.286 "config": [ 00:18:51.287 { 00:18:51.287 "method": "sock_set_default_impl", 00:18:51.287 "params": { 00:18:51.287 "impl_name": "posix" 00:18:51.287 } 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "method": "sock_impl_set_options", 00:18:51.287 "params": { 00:18:51.287 "impl_name": "ssl", 00:18:51.287 "recv_buf_size": 4096, 00:18:51.287 "send_buf_size": 4096, 00:18:51.287 "enable_recv_pipe": true, 00:18:51.287 "enable_quickack": false, 00:18:51.287 "enable_placement_id": 0, 00:18:51.287 "enable_zerocopy_send_server": true, 00:18:51.287 "enable_zerocopy_send_client": false, 00:18:51.287 "zerocopy_threshold": 0, 00:18:51.287 "tls_version": 0, 00:18:51.287 "enable_ktls": false 00:18:51.287 } 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "method": "sock_impl_set_options", 00:18:51.287 "params": { 00:18:51.287 "impl_name": "posix", 00:18:51.287 "recv_buf_size": 2097152, 00:18:51.287 "send_buf_size": 2097152, 00:18:51.287 "enable_recv_pipe": true, 00:18:51.287 "enable_quickack": false, 00:18:51.287 "enable_placement_id": 0, 00:18:51.287 "enable_zerocopy_send_server": true, 00:18:51.287 "enable_zerocopy_send_client": false, 00:18:51.287 "zerocopy_threshold": 0, 00:18:51.287 "tls_version": 0, 00:18:51.287 "enable_ktls": false 00:18:51.287 } 00:18:51.287 } 00:18:51.287 ] 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "subsystem": "vmd", 00:18:51.287 "config": [] 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "subsystem": "accel", 00:18:51.287 "config": [ 00:18:51.287 { 00:18:51.287 "method": "accel_set_options", 00:18:51.287 "params": { 00:18:51.287 "small_cache_size": 128, 00:18:51.287 "large_cache_size": 16, 00:18:51.287 "task_count": 2048, 00:18:51.287 "sequence_count": 2048, 00:18:51.287 "buf_count": 2048 00:18:51.287 } 00:18:51.287 } 00:18:51.287 ] 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "subsystem": "bdev", 00:18:51.287 "config": [ 00:18:51.287 { 00:18:51.287 "method": "bdev_set_options", 00:18:51.287 "params": { 00:18:51.287 "bdev_io_pool_size": 65535, 00:18:51.287 "bdev_io_cache_size": 256, 00:18:51.287 "bdev_auto_examine": true, 00:18:51.287 "iobuf_small_cache_size": 128, 00:18:51.287 "iobuf_large_cache_size": 16 00:18:51.287 } 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "method": "bdev_raid_set_options", 00:18:51.287 "params": { 00:18:51.287 "process_window_size_kb": 1024, 00:18:51.287 "process_max_bandwidth_mb_sec": 0 00:18:51.287 } 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "method": "bdev_iscsi_set_options", 00:18:51.287 "params": { 00:18:51.287 "timeout_sec": 30 00:18:51.287 } 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "method": "bdev_nvme_set_options", 00:18:51.287 "params": { 00:18:51.287 "action_on_timeout": "none", 00:18:51.287 "timeout_us": 0, 00:18:51.287 "timeout_admin_us": 0, 00:18:51.287 "keep_alive_timeout_ms": 10000, 00:18:51.287 "arbitration_burst": 0, 00:18:51.287 "low_priority_weight": 0, 00:18:51.287 "medium_priority_weight": 0, 00:18:51.287 "high_priority_weight": 0, 00:18:51.287 "nvme_adminq_poll_period_us": 10000, 00:18:51.287 "nvme_ioq_poll_period_us": 0, 00:18:51.287 "io_queue_requests": 0, 00:18:51.287 "delay_cmd_submit": true, 00:18:51.287 "transport_retry_count": 4, 00:18:51.287 
"bdev_retry_count": 3, 00:18:51.287 "transport_ack_timeout": 0, 00:18:51.287 "ctrlr_loss_timeout_sec": 0, 00:18:51.287 "reconnect_delay_sec": 0, 00:18:51.287 "fast_io_fail_timeout_sec": 0, 00:18:51.287 "disable_auto_failback": false, 00:18:51.287 "generate_uuids": false, 00:18:51.287 "transport_tos": 0, 00:18:51.287 "nvme_error_stat": false, 00:18:51.287 "rdma_srq_size": 0, 00:18:51.287 "io_path_stat": false, 00:18:51.287 "allow_accel_sequence": false, 00:18:51.287 "rdma_max_cq_size": 0, 00:18:51.287 "rdma_cm_event_timeout_ms": 0, 00:18:51.287 "dhchap_digests": [ 00:18:51.287 "sha256", 00:18:51.287 "sha384", 00:18:51.287 "sha512" 00:18:51.287 ], 00:18:51.287 "dhchap_dhgroups": [ 00:18:51.287 "null", 00:18:51.287 "ffdhe2048", 00:18:51.287 "ffdhe3072", 00:18:51.287 "ffdhe4096", 00:18:51.287 "ffdhe6144", 00:18:51.287 "ffdhe8192" 00:18:51.287 ] 00:18:51.287 } 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "method": "bdev_nvme_set_hotplug", 00:18:51.287 "params": { 00:18:51.287 "period_us": 100000, 00:18:51.287 "enable": false 00:18:51.287 } 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "method": "bdev_malloc_create", 00:18:51.287 "params": { 00:18:51.287 "name": "malloc0", 00:18:51.287 "num_blocks": 8192, 00:18:51.287 "block_size": 4096, 00:18:51.287 "physical_block_size": 4096, 00:18:51.287 "uuid": "abd0d898-f5ac-43d1-a6de-19ecf9c01441", 00:18:51.287 "optimal_io_boundary": 0, 00:18:51.287 "md_size": 0, 00:18:51.287 "dif_type": 0, 00:18:51.287 "dif_is_head_of_md": false, 00:18:51.287 "dif_pi_format": 0 00:18:51.287 } 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "method": "bdev_wait_for_examine" 00:18:51.287 } 00:18:51.287 ] 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "subsystem": "scsi", 00:18:51.287 "config": null 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "subsystem": "scheduler", 00:18:51.287 "config": [ 00:18:51.287 { 00:18:51.287 "method": "framework_set_scheduler", 00:18:51.287 "params": { 00:18:51.287 "name": "static" 00:18:51.287 } 00:18:51.287 } 00:18:51.287 ] 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "subsystem": "vhost_scsi", 00:18:51.287 "config": [] 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "subsystem": "vhost_blk", 00:18:51.287 "config": [] 00:18:51.287 }, 00:18:51.287 { 00:18:51.287 "subsystem": "ublk", 00:18:51.287 "config": [ 00:18:51.287 { 00:18:51.287 "method": "ublk_create_target", 00:18:51.287 "params": { 00:18:51.287 "cpumask": "1" 00:18:51.287 } 00:18:51.287 }, 00:18:51.287 { 00:18:51.288 "method": "ublk_start_disk", 00:18:51.288 "params": { 00:18:51.288 "bdev_name": "malloc0", 00:18:51.288 "ublk_id": 0, 00:18:51.288 "num_queues": 1, 00:18:51.288 "queue_depth": 128 00:18:51.288 } 00:18:51.288 } 00:18:51.288 ] 00:18:51.288 }, 00:18:51.288 { 00:18:51.288 "subsystem": "nbd", 00:18:51.288 "config": [] 00:18:51.288 }, 00:18:51.288 { 00:18:51.288 "subsystem": "nvmf", 00:18:51.288 "config": [ 00:18:51.288 { 00:18:51.288 "method": "nvmf_set_config", 00:18:51.288 "params": { 00:18:51.288 "discovery_filter": "match_any", 00:18:51.288 "admin_cmd_passthru": { 00:18:51.288 "identify_ctrlr": false 00:18:51.288 }, 00:18:51.288 "dhchap_digests": [ 00:18:51.288 "sha256", 00:18:51.288 "sha384", 00:18:51.288 "sha512" 00:18:51.288 ], 00:18:51.288 "dhchap_dhgroups": [ 00:18:51.288 "null", 00:18:51.288 "ffdhe2048", 00:18:51.288 "ffdhe3072", 00:18:51.288 "ffdhe4096", 00:18:51.288 "ffdhe6144", 00:18:51.288 "ffdhe8192" 00:18:51.288 ] 00:18:51.288 } 00:18:51.288 }, 00:18:51.288 { 00:18:51.288 "method": "nvmf_set_max_subsystems", 00:18:51.288 "params": { 00:18:51.288 "max_subsystems": 1024 
00:18:51.288 } 00:18:51.288 }, 00:18:51.288 { 00:18:51.288 "method": "nvmf_set_crdt", 00:18:51.288 "params": { 00:18:51.288 "crdt1": 0, 00:18:51.288 "crdt2": 0, 00:18:51.288 "crdt3": 0 00:18:51.288 } 00:18:51.288 } 00:18:51.288 ] 00:18:51.288 }, 00:18:51.288 { 00:18:51.288 "subsystem": "iscsi", 00:18:51.288 "config": [ 00:18:51.288 { 00:18:51.288 "method": "iscsi_set_options", 00:18:51.288 "params": { 00:18:51.288 "node_base": "iqn.2016-06.io.spdk", 00:18:51.288 "max_sessions": 128, 00:18:51.288 "max_connections_per_session": 2, 00:18:51.288 "max_queue_depth": 64, 00:18:51.288 "default_time2wait": 2, 00:18:51.288 "default_time2retain": 20, 00:18:51.288 "first_burst_length": 8192, 00:18:51.288 "immediate_data": true, 00:18:51.288 "allow_duplicated_isid": false, 00:18:51.288 "error_recovery_level": 0, 00:18:51.288 "nop_timeout": 60, 00:18:51.288 "nop_in_interval": 30, 00:18:51.288 "disable_chap": false, 00:18:51.288 "require_chap": false, 00:18:51.288 "mutual_chap": false, 00:18:51.288 "chap_group": 0, 00:18:51.288 "max_large_datain_per_connection": 64, 00:18:51.288 "max_r2t_per_connection": 4, 00:18:51.288 "pdu_pool_size": 36864, 00:18:51.288 "immediate_data_pool_size": 16384, 00:18:51.288 "data_out_pool_size": 2048 00:18:51.288 } 00:18:51.288 } 00:18:51.288 ] 00:18:51.288 } 00:18:51.288 ] 00:18:51.288 }' 00:18:51.288 19:03:22 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75443 00:18:51.288 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75443 ']' 00:18:51.288 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75443 00:18:51.288 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:51.288 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.288 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75443 00:18:51.288 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.288 killing process with pid 75443 00:18:51.288 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.288 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75443' 00:18:51.288 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75443 00:18:51.288 19:03:22 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75443 00:18:52.665 [2024-11-26 19:03:23.797646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:52.665 [2024-11-26 19:03:23.839259] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:52.665 [2024-11-26 19:03:23.839442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:52.665 [2024-11-26 19:03:23.847244] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:52.665 [2024-11-26 19:03:23.847320] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:52.665 [2024-11-26 19:03:23.847355] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:52.665 [2024-11-26 19:03:23.847393] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:52.665 [2024-11-26 19:03:23.847578] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:55.228 19:03:26 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75515 00:18:55.228 19:03:26 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75515 00:18:55.228 19:03:26 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:55.228 19:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75515 ']' 00:18:55.228 19:03:26 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:55.228 "subsystems": [ 00:18:55.228 { 00:18:55.228 "subsystem": "fsdev", 00:18:55.228 "config": [ 00:18:55.228 { 00:18:55.228 "method": "fsdev_set_opts", 00:18:55.228 "params": { 00:18:55.228 "fsdev_io_pool_size": 65535, 00:18:55.228 "fsdev_io_cache_size": 256 00:18:55.228 } 00:18:55.228 } 00:18:55.228 ] 00:18:55.228 }, 00:18:55.228 { 00:18:55.228 "subsystem": "keyring", 00:18:55.228 "config": [] 00:18:55.228 }, 00:18:55.228 { 00:18:55.228 "subsystem": "iobuf", 00:18:55.228 "config": [ 00:18:55.228 { 00:18:55.228 "method": "iobuf_set_options", 00:18:55.228 "params": { 00:18:55.228 "small_pool_count": 8192, 00:18:55.228 "large_pool_count": 1024, 00:18:55.228 "small_bufsize": 8192, 00:18:55.228 "large_bufsize": 135168, 00:18:55.228 "enable_numa": false 00:18:55.228 } 00:18:55.228 } 00:18:55.228 ] 00:18:55.228 }, 00:18:55.228 { 00:18:55.228 "subsystem": "sock", 00:18:55.228 "config": [ 00:18:55.228 { 00:18:55.228 "method": "sock_set_default_impl", 00:18:55.228 "params": { 00:18:55.228 "impl_name": "posix" 00:18:55.228 } 00:18:55.228 }, 00:18:55.228 { 00:18:55.228 "method": "sock_impl_set_options", 00:18:55.228 "params": { 00:18:55.228 "impl_name": "ssl", 00:18:55.228 "recv_buf_size": 4096, 00:18:55.228 "send_buf_size": 4096, 00:18:55.228 "enable_recv_pipe": true, 00:18:55.228 "enable_quickack": false, 00:18:55.228 "enable_placement_id": 0, 00:18:55.228 "enable_zerocopy_send_server": true, 00:18:55.228 "enable_zerocopy_send_client": false, 00:18:55.228 "zerocopy_threshold": 0, 00:18:55.228 "tls_version": 0, 00:18:55.229 "enable_ktls": false 00:18:55.229 } 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "method": "sock_impl_set_options", 00:18:55.229 "params": { 00:18:55.229 "impl_name": "posix", 00:18:55.229 "recv_buf_size": 2097152, 00:18:55.229 "send_buf_size": 2097152, 00:18:55.229 "enable_recv_pipe": true, 00:18:55.229 "enable_quickack": false, 00:18:55.229 "enable_placement_id": 0, 00:18:55.229 "enable_zerocopy_send_server": true, 00:18:55.229 "enable_zerocopy_send_client": false, 00:18:55.229 "zerocopy_threshold": 0, 00:18:55.229 "tls_version": 0, 00:18:55.229 "enable_ktls": false 00:18:55.229 } 00:18:55.229 } 00:18:55.229 ] 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "subsystem": "vmd", 00:18:55.229 "config": [] 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "subsystem": "accel", 00:18:55.229 "config": [ 00:18:55.229 { 00:18:55.229 "method": "accel_set_options", 00:18:55.229 "params": { 00:18:55.229 "small_cache_size": 128, 00:18:55.229 "large_cache_size": 16, 00:18:55.229 "task_count": 2048, 00:18:55.229 "sequence_count": 2048, 00:18:55.229 "buf_count": 2048 00:18:55.229 } 00:18:55.229 } 00:18:55.229 ] 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "subsystem": "bdev", 00:18:55.229 "config": [ 00:18:55.229 { 00:18:55.229 "method": "bdev_set_options", 00:18:55.229 "params": { 00:18:55.229 "bdev_io_pool_size": 65535, 00:18:55.229 "bdev_io_cache_size": 256, 00:18:55.229 "bdev_auto_examine": true, 00:18:55.229 "iobuf_small_cache_size": 128, 00:18:55.229 "iobuf_large_cache_size": 16 00:18:55.229 } 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "method": "bdev_raid_set_options", 00:18:55.229 "params": { 00:18:55.229 
"process_window_size_kb": 1024, 00:18:55.229 "process_max_bandwidth_mb_sec": 0 00:18:55.229 } 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "method": "bdev_iscsi_set_options", 00:18:55.229 "params": { 00:18:55.229 "timeout_sec": 30 00:18:55.229 } 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "method": "bdev_nvme_set_options", 00:18:55.229 "params": { 00:18:55.229 "action_on_timeout": "none", 00:18:55.229 "timeout_us": 0, 00:18:55.229 "timeout_admin_us": 0, 00:18:55.229 "keep_alive_timeout_ms": 10000, 00:18:55.229 "arbitration_burst": 0, 00:18:55.229 "low_priority_weight": 0, 00:18:55.229 "medium_priority_weight": 0, 00:18:55.229 "high_priority_weight": 0, 00:18:55.229 "nvme_adminq_poll_period_us": 10000, 00:18:55.229 "nvme_ioq_poll_period_us": 0, 00:18:55.229 "io_queue_requests": 0, 00:18:55.229 "delay_cmd_submit": true, 00:18:55.229 "transport_retry_count": 4, 00:18:55.229 "bdev_retry_count": 3, 00:18:55.229 "transport_ack_timeout": 0, 00:18:55.229 "ctrlr_loss_timeout_sec": 0, 00:18:55.229 "reconnect_delay_sec": 0, 00:18:55.229 "fast_io_fail_timeout_sec": 0, 00:18:55.229 "disable_auto_failback": false, 00:18:55.229 "generate_uuids": false, 00:18:55.229 "transport_tos": 0, 00:18:55.229 "nvme_error_stat": false, 00:18:55.229 "rdma_srq_size": 0, 00:18:55.229 "io_path_stat": false, 00:18:55.229 "allow_accel_sequence": false, 00:18:55.229 "rdma_max_cq_size": 0, 00:18:55.229 "rdma_cm_event_timeout_ms": 0, 00:18:55.229 "dhchap_digests": [ 00:18:55.229 "sha256", 00:18:55.229 "sha384", 00:18:55.229 "sha512" 00:18:55.229 ], 00:18:55.229 "dhchap_dhgroups": [ 00:18:55.229 "null", 00:18:55.229 "ffdhe2048", 00:18:55.229 "ffdhe3072", 00:18:55.229 "ffdhe4096", 00:18:55.229 "ffdhe6144", 00:18:55.229 "ffdhe8192" 00:18:55.229 ] 00:18:55.229 } 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "method": "bdev_nvme_set_hotplug", 00:18:55.229 "params": { 00:18:55.229 "period_us": 100000, 00:18:55.229 "enable": false 00:18:55.229 } 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "method": "bdev_malloc_create", 00:18:55.229 "params": { 00:18:55.229 "name": "malloc0", 00:18:55.229 "num_blocks": 8192, 00:18:55.229 "block_size": 4096, 00:18:55.229 "physical_block_size": 4096, 00:18:55.229 "uuid": "abd0d898-f5ac-43d1-a6de-19ecf9c01441", 00:18:55.229 "optimal_io_boundary": 0, 00:18:55.229 "md_size": 0, 00:18:55.229 "dif_type": 0, 00:18:55.229 "dif_is_head_of_md": false, 00:18:55.229 "dif_pi_format": 0 00:18:55.229 } 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "method": "bdev_wait_for_examine" 00:18:55.229 } 00:18:55.229 ] 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "subsystem": "scsi", 00:18:55.229 "config": null 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "subsystem": "scheduler", 00:18:55.229 "config": [ 00:18:55.229 { 00:18:55.229 "method": "framework_set_scheduler", 00:18:55.229 "params": { 00:18:55.229 "name": "static" 00:18:55.229 } 00:18:55.229 } 00:18:55.229 ] 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "subsystem": "vhost_scsi", 00:18:55.229 "config": [] 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "subsystem": "vhost_blk", 00:18:55.229 "config": [] 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "subsystem": "ublk", 00:18:55.229 "config": [ 00:18:55.229 { 00:18:55.229 "method": "ublk_create_target", 00:18:55.229 "params": { 00:18:55.229 "cpumask": "1" 00:18:55.229 } 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "method": "ublk_start_disk", 00:18:55.229 "params": { 00:18:55.229 "bdev_name": "malloc0", 00:18:55.229 "ublk_id": 0, 00:18:55.229 "num_queues": 1, 00:18:55.229 "queue_depth": 128 00:18:55.229 } 00:18:55.229 } 
00:18:55.229 ] 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "subsystem": "nbd", 00:18:55.229 "config": [] 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "subsystem": "nvmf", 00:18:55.229 "config": [ 00:18:55.229 { 00:18:55.229 "method": "nvmf_set_config", 00:18:55.229 "params": { 00:18:55.229 "discovery_filter": "match_any", 00:18:55.229 "admin_cmd_passthru": { 00:18:55.229 "identify_ctrlr": false 00:18:55.229 }, 00:18:55.229 "dhchap_digests": [ 00:18:55.229 "sha256", 00:18:55.229 "sha384", 00:18:55.229 "sha512" 00:18:55.229 ], 00:18:55.229 "dhchap_dhgroups": [ 00:18:55.229 "null", 00:18:55.229 "ffdhe2048", 00:18:55.229 "ffdhe3072", 00:18:55.229 "ffdhe4096", 00:18:55.229 "ffdhe6144", 00:18:55.229 "ffdhe8192" 00:18:55.229 ] 00:18:55.229 } 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "method": "nvmf_set_max_subsystems", 00:18:55.229 "params": { 00:18:55.229 "max_subsystems": 1024 00:18:55.229 } 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "method": "nvmf_set_crdt", 00:18:55.229 "params": { 00:18:55.229 "crdt1": 0, 00:18:55.229 "crdt2": 0, 00:18:55.229 "crdt3": 0 00:18:55.229 } 00:18:55.229 } 00:18:55.229 ] 00:18:55.229 }, 00:18:55.229 { 00:18:55.229 "subsystem": "iscsi", 00:18:55.229 "config": [ 00:18:55.229 { 00:18:55.229 "method": "iscsi_set_options", 00:18:55.229 "params": { 00:18:55.229 "node_base": "iqn.2016-06.io.spdk", 00:18:55.229 "max_sessions": 128, 00:18:55.229 "max_connections_per_session": 2, 00:18:55.229 "max_queue_depth": 64, 00:18:55.229 "default_time2wait": 2, 00:18:55.229 "default_time2retain": 20, 00:18:55.229 "first_burst_length": 8192, 00:18:55.229 "immediate_data": true, 00:18:55.229 "allow_duplicated_isid": false, 00:18:55.229 "error_recovery_level": 0, 00:18:55.229 "nop_timeout": 60, 00:18:55.229 "nop_in_interval": 30, 00:18:55.229 "disable_chap": false, 00:18:55.229 "require_chap": false, 00:18:55.229 "mutual_chap": false, 00:18:55.229 "chap_group": 0, 00:18:55.229 "max_large_datain_per_connection": 64, 00:18:55.229 "max_r2t_per_connection": 4, 00:18:55.229 "pdu_pool_size": 36864, 00:18:55.229 "immediate_data_pool_size": 16384, 00:18:55.229 "data_out_pool_size": 2048 00:18:55.229 } 00:18:55.229 } 00:18:55.229 ] 00:18:55.229 } 00:18:55.229 ] 00:18:55.229 }' 00:18:55.229 19:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.229 19:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.229 19:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.229 19:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.229 19:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:55.229 [2024-11-26 19:03:26.220440] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
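The restore half of the test never writes this config to disk: the echoed JSON above is handed to the new target as -c /dev/fd/63, i.e. bash process substitution. Roughly, assuming the config was captured into a shell variable:

  # replay the saved configuration into a fresh target over a file descriptor
  ./build/bin/spdk_tgt -L ublk -c <(echo "$config")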
00:18:55.229 [2024-11-26 19:03:26.220599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75515 ] 00:18:55.517 [2024-11-26 19:03:26.416685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.517 [2024-11-26 19:03:26.564326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.892 [2024-11-26 19:03:27.680197] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:56.892 [2024-11-26 19:03:27.681293] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:56.892 [2024-11-26 19:03:27.688414] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:56.892 [2024-11-26 19:03:27.688568] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:56.892 [2024-11-26 19:03:27.688599] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:56.893 [2024-11-26 19:03:27.688616] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:56.893 [2024-11-26 19:03:27.697290] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:56.893 [2024-11-26 19:03:27.697327] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:56.893 [2024-11-26 19:03:27.704231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:56.893 [2024-11-26 19:03:27.704365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:56.893 [2024-11-26 19:03:27.721214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75515 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75515 ']' 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75515 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75515 00:18:56.893 killing process with pid 75515 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.893 
19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75515' 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75515 00:18:56.893 19:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75515 00:18:58.268 [2024-11-26 19:03:29.342440] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:58.268 [2024-11-26 19:03:29.376228] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:58.268 [2024-11-26 19:03:29.376419] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:58.268 [2024-11-26 19:03:29.388239] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:58.268 [2024-11-26 19:03:29.388313] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:58.268 [2024-11-26 19:03:29.388327] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:58.268 [2024-11-26 19:03:29.388365] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:58.268 [2024-11-26 19:03:29.388616] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:00.168 19:03:31 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:19:00.168 00:19:00.168 real 0m10.512s 00:19:00.168 user 0m8.133s 00:19:00.168 sys 0m3.453s 00:19:00.168 19:03:31 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.168 19:03:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:00.168 ************************************ 00:19:00.168 END TEST test_save_ublk_config 00:19:00.168 ************************************ 00:19:00.168 19:03:31 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75601 00:19:00.168 19:03:31 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:00.168 19:03:31 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.168 19:03:31 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75601 00:19:00.168 19:03:31 ublk -- common/autotest_common.sh@835 -- # '[' -z 75601 ']' 00:19:00.168 19:03:31 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.168 19:03:31 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.168 19:03:31 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.168 19:03:31 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.168 19:03:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:00.168 [2024-11-26 19:03:31.349216] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:19:00.168 [2024-11-26 19:03:31.349425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75601 ] 00:19:00.426 [2024-11-26 19:03:31.535779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:00.683 [2024-11-26 19:03:31.641021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.683 [2024-11-26 19:03:31.641038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.248 19:03:32 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.248 19:03:32 ublk -- common/autotest_common.sh@868 -- # return 0 00:19:01.248 19:03:32 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:19:01.248 19:03:32 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:01.248 19:03:32 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.248 19:03:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:01.248 ************************************ 00:19:01.248 START TEST test_create_ublk 00:19:01.248 ************************************ 00:19:01.248 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:19:01.248 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:19:01.248 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.248 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:01.248 [2024-11-26 19:03:32.426205] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:01.248 [2024-11-26 19:03:32.428685] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:01.248 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.248 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:19:01.248 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:19:01.248 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.248 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:01.505 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.506 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:01.763 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.763 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:01.763 [2024-11-26 19:03:32.730421] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:01.763 [2024-11-26 19:03:32.731008] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:01.763 [2024-11-26 19:03:32.731041] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:01.763 [2024-11-26 19:03:32.731053] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:01.763 [2024-11-26 19:03:32.738257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:01.763 [2024-11-26 19:03:32.738294] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:01.763 
[2024-11-26 19:03:32.746236] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:01.763 [2024-11-26 19:03:32.747003] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:01.763 [2024-11-26 19:03:32.769269] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:01.763 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:19:01.763 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.763 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:01.763 19:03:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:19:01.763 { 00:19:01.763 "ublk_device": "/dev/ublkb0", 00:19:01.763 "id": 0, 00:19:01.763 "queue_depth": 512, 00:19:01.763 "num_queues": 4, 00:19:01.763 "bdev_name": "Malloc0" 00:19:01.763 } 00:19:01.763 ]' 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:19:01.763 19:03:32 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:19:02.020 19:03:33 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:19:02.020 19:03:33 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:19:02.020 19:03:33 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:02.020 19:03:33 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:19:02.020 19:03:33 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:19:02.020 19:03:33 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:19:02.020 19:03:33 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:19:02.020 19:03:33 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:19:02.020 19:03:33 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:19:02.020 19:03:33 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:19:02.020 19:03:33 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:19:02.020 19:03:33 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:19:02.020 19:03:33 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:19:02.020 19:03:33 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:19:02.020 19:03:33 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:19:02.020 fio: verification read phase will never start because write phase uses all of runtime 00:19:02.020 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:19:02.020 fio-3.35 00:19:02.020 Starting 1 process 00:19:12.101 00:19:12.101 fio_test: (groupid=0, jobs=1): err= 0: pid=75653: Tue Nov 26 19:03:43 2024 00:19:12.101 write: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(444MiB/10001msec); 0 zone resets 00:19:12.101 clat (usec): min=54, max=4130, avg=86.13, stdev=129.32 00:19:12.101 lat (usec): min=55, max=4131, avg=87.09, stdev=129.37 00:19:12.101 clat percentiles (usec): 00:19:12.101 | 1.00th=[ 62], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 73], 00:19:12.101 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 76], 60.00th=[ 78], 00:19:12.101 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 92], 95.00th=[ 101], 00:19:12.101 | 99.00th=[ 128], 99.50th=[ 159], 99.90th=[ 2638], 99.95th=[ 3163], 00:19:12.101 | 99.99th=[ 3752] 00:19:12.101 bw ( KiB/s): min=40160, max=49684, per=99.88%, avg=45444.84, stdev=2047.22, samples=19 00:19:12.101 iops : min=10040, max=12421, avg=11361.21, stdev=511.81, samples=19 00:19:12.101 lat (usec) : 100=94.77%, 250=4.82%, 500=0.07%, 750=0.02%, 1000=0.02% 00:19:12.101 lat (msec) : 2=0.12%, 4=0.17%, 10=0.01% 00:19:12.101 cpu : usr=3.58%, sys=9.84%, ctx=113757, majf=0, minf=794 00:19:12.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:12.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.101 issued rwts: total=0,113759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:12.101 00:19:12.101 Run status group 0 (all jobs): 00:19:12.101 WRITE: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=444MiB (466MB), run=10001-10001msec 00:19:12.101 00:19:12.101 Disk stats (read/write): 00:19:12.101 ublkb0: ios=0/112529, merge=0/0, ticks=0/8616, in_queue=8617, util=99.09% 00:19:12.101 19:03:43 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:19:12.101 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.101 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.360 [2024-11-26 19:03:43.318248] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:12.360 [2024-11-26 19:03:43.355289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:12.360 [2024-11-26 19:03:43.356199] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:12.360 [2024-11-26 19:03:43.364276] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:12.360 [2024-11-26 19:03:43.364620] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:12.360 [2024-11-26 19:03:43.364646] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.360 19:03:43 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:19:12.360 19:03:43 
ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.360 [2024-11-26 19:03:43.379306] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:19:12.360 request: 00:19:12.360 { 00:19:12.360 "ublk_id": 0, 00:19:12.360 "method": "ublk_stop_disk", 00:19:12.360 "req_id": 1 00:19:12.360 } 00:19:12.360 Got JSON-RPC error response 00:19:12.360 response: 00:19:12.360 { 00:19:12.360 "code": -19, 00:19:12.360 "message": "No such device" 00:19:12.360 } 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:12.360 19:03:43 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.360 [2024-11-26 19:03:43.394318] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:12.360 [2024-11-26 19:03:43.402200] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:12.360 [2024-11-26 19:03:43.402260] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.360 19:03:43 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.360 19:03:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.927 19:03:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.927 19:03:44 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:19:12.927 19:03:44 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:12.927 19:03:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.927 19:03:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.927 19:03:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.927 19:03:44 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:12.927 19:03:44 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:19:12.927 19:03:44 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:19:12.927 19:03:44 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:12.927 19:03:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.927 19:03:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.927 19:03:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.927 19:03:44 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:12.927 19:03:44 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:19:13.186 19:03:44 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:13.186 00:19:13.186 real 0m11.736s 00:19:13.186 user 0m0.821s 00:19:13.186 sys 0m1.093s 00:19:13.186 19:03:44 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.186 ************************************ 00:19:13.186 END TEST test_create_ublk 00:19:13.186 ************************************ 00:19:13.186 19:03:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:13.186 19:03:44 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:19:13.186 19:03:44 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:13.186 19:03:44 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.186 19:03:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:13.186 ************************************ 00:19:13.186 START TEST test_create_multi_ublk 00:19:13.186 ************************************ 00:19:13.186 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:19:13.186 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:19:13.186 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.186 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:13.186 [2024-11-26 19:03:44.215206] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:13.186 [2024-11-26 19:03:44.217581] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:13.186 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.186 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:19:13.186 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:19:13.186 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:13.186 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:19:13.186 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.186 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:13.445 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.445 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:19:13.445 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:13.445 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.445 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:13.445 [2024-11-26 19:03:44.503374] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:13.445 [2024-11-26 
19:03:44.503888] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:13.445 [2024-11-26 19:03:44.503910] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:13.445 [2024-11-26 19:03:44.503927] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:13.445 [2024-11-26 19:03:44.515199] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:13.445 [2024-11-26 19:03:44.515239] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:13.445 [2024-11-26 19:03:44.526201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:13.445 [2024-11-26 19:03:44.526992] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:13.445 [2024-11-26 19:03:44.541312] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:13.445 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.445 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:19:13.445 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:13.445 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:19:13.445 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.445 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:13.703 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.703 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:19:13.703 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:19:13.703 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.703 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:13.703 [2024-11-26 19:03:44.797372] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:19:13.703 [2024-11-26 19:03:44.797859] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:19:13.703 [2024-11-26 19:03:44.797884] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:13.703 [2024-11-26 19:03:44.797896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:13.703 [2024-11-26 19:03:44.806434] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:13.703 [2024-11-26 19:03:44.806463] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:13.703 [2024-11-26 19:03:44.813207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:13.703 [2024-11-26 19:03:44.813939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:13.703 [2024-11-26 19:03:44.819402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:13.703 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.703 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:19:13.703 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:13.703 19:03:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:19:13.703 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.703 19:03:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:13.961 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.961 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:19:13.961 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:19:13.961 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.961 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:13.961 [2024-11-26 19:03:45.068361] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:19:13.961 [2024-11-26 19:03:45.068856] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:19:13.961 [2024-11-26 19:03:45.068885] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:19:13.961 [2024-11-26 19:03:45.068899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:19:13.961 [2024-11-26 19:03:45.076223] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:13.961 [2024-11-26 19:03:45.076259] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:13.961 [2024-11-26 19:03:45.084226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:13.961 [2024-11-26 19:03:45.084970] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:19:13.961 [2024-11-26 19:03:45.090423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:19:13.961 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.961 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:19:13.961 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:13.962 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:19:13.962 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.962 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.220 [2024-11-26 19:03:45.351464] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:19:14.220 [2024-11-26 19:03:45.352006] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:19:14.220 [2024-11-26 19:03:45.352034] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:19:14.220 [2024-11-26 19:03:45.352045] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:19:14.220 [2024-11-26 19:03:45.359279] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:14.220 [2024-11-26 19:03:45.359320] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:14.220 [2024-11-26 19:03:45.367310] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:14.220 [2024-11-26 19:03:45.368115] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:19:14.220 [2024-11-26 19:03:45.376292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:19:14.220 { 00:19:14.220 "ublk_device": "/dev/ublkb0", 00:19:14.220 "id": 0, 00:19:14.220 "queue_depth": 512, 00:19:14.220 "num_queues": 4, 00:19:14.220 "bdev_name": "Malloc0" 00:19:14.220 }, 00:19:14.220 { 00:19:14.220 "ublk_device": "/dev/ublkb1", 00:19:14.220 "id": 1, 00:19:14.220 "queue_depth": 512, 00:19:14.220 "num_queues": 4, 00:19:14.220 "bdev_name": "Malloc1" 00:19:14.220 }, 00:19:14.220 { 00:19:14.220 "ublk_device": "/dev/ublkb2", 00:19:14.220 "id": 2, 00:19:14.220 "queue_depth": 512, 00:19:14.220 "num_queues": 4, 00:19:14.220 "bdev_name": "Malloc2" 00:19:14.220 }, 00:19:14.220 { 00:19:14.220 "ublk_device": "/dev/ublkb3", 00:19:14.220 "id": 3, 00:19:14.220 "queue_depth": 512, 00:19:14.220 "num_queues": 4, 00:19:14.220 "bdev_name": "Malloc3" 00:19:14.220 } 00:19:14.220 ]' 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:14.220 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:19:14.479 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:14.480 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:19:14.480 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:19:14.480 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:19:14.480 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:14.480 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:19:14.480 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:14.480 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:19:14.480 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:14.480 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:14.480 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:19:14.738 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:19:14.738 19:03:45 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:19:14.738 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:19:14.738 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:19:14.738 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:14.738 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:19:14.738 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:14.738 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:19:14.738 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:19:14.738 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:14.738 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:19:14.995 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:19:14.996 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:19:14.996 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:19:14.996 19:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:19:14.996 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:14.996 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:19:14.996 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:14.996 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:19:14.996 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:19:14.996 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:14.996 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:19:14.996 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:19:14.996 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.254 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.254 [2024-11-26 19:03:46.420438] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:19:15.254 [2024-11-26 19:03:46.451693] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:15.254 [2024-11-26 19:03:46.452907] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:15.254 [2024-11-26 19:03:46.459219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:15.254 [2024-11-26 19:03:46.459582] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:15.254 [2024-11-26 19:03:46.459610] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.514 [2024-11-26 19:03:46.474307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:15.514 [2024-11-26 19:03:46.504254] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:15.514 [2024-11-26 19:03:46.505371] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:15.514 [2024-11-26 19:03:46.512223] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:15.514 [2024-11-26 19:03:46.512555] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:15.514 [2024-11-26 19:03:46.512581] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.514 [2024-11-26 19:03:46.527345] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:19:15.514 [2024-11-26 19:03:46.559278] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:15.514 [2024-11-26 19:03:46.560291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:19:15.514 [2024-11-26 19:03:46.566212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:15.514 [2024-11-26 19:03:46.566579] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:19:15.514 [2024-11-26 19:03:46.566607] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.514 [2024-11-26 
19:03:46.573334] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:19:15.514 [2024-11-26 19:03:46.608289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:15.514 [2024-11-26 19:03:46.609210] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:19:15.514 [2024-11-26 19:03:46.617291] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:15.514 [2024-11-26 19:03:46.617615] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:19:15.514 [2024-11-26 19:03:46.617637] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.514 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:19:15.773 [2024-11-26 19:03:46.894326] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:15.773 [2024-11-26 19:03:46.902193] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:15.773 [2024-11-26 19:03:46.902248] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:15.773 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:19:15.773 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:15.773 19:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:15.773 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.773 19:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.340 19:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.340 19:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.340 19:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:16.340 19:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.340 19:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.908 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.908 19:03:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.908 19:03:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:16.908 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.908 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.473 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.473 19:03:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:17.473 19:03:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:19:17.473 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.473 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:17.732 00:19:17.732 real 0m4.624s 00:19:17.732 user 0m1.300s 00:19:17.732 sys 0m0.176s 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.732 19:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.733 ************************************ 00:19:17.733 END TEST test_create_multi_ublk 00:19:17.733 ************************************ 00:19:17.733 19:03:48 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:17.733 19:03:48 ublk -- ublk/ublk.sh@147 -- # cleanup 00:19:17.733 19:03:48 ublk -- ublk/ublk.sh@130 -- # killprocess 75601 00:19:17.733 19:03:48 ublk -- common/autotest_common.sh@954 -- # '[' -z 75601 ']' 00:19:17.733 19:03:48 ublk -- common/autotest_common.sh@958 -- # kill -0 75601 00:19:17.733 19:03:48 ublk -- common/autotest_common.sh@959 -- # uname 00:19:17.733 19:03:48 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.733 19:03:48 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75601 00:19:17.733 19:03:48 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.733 killing process with pid 75601 00:19:17.733 19:03:48 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.733 19:03:48 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75601' 00:19:17.733 19:03:48 ublk -- common/autotest_common.sh@973 -- # kill 75601 00:19:17.733 19:03:48 ublk -- common/autotest_common.sh@978 -- # wait 75601 00:19:18.665 [2024-11-26 19:03:49.876206] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:18.665 [2024-11-26 19:03:49.876274] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:20.054 00:19:20.054 real 0m30.574s 00:19:20.054 user 0m43.800s 00:19:20.054 sys 0m10.682s 00:19:20.054 19:03:51 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.054 ************************************ 00:19:20.054 END TEST ublk 00:19:20.054 19:03:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:20.054 ************************************ 00:19:20.054 19:03:51 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:20.054 19:03:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:20.054 
19:03:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.054 19:03:51 -- common/autotest_common.sh@10 -- # set +x 00:19:20.054 ************************************ 00:19:20.054 START TEST ublk_recovery 00:19:20.054 ************************************ 00:19:20.054 19:03:51 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:20.054 * Looking for test storage... 00:19:20.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:20.054 19:03:51 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.054 19:03:51 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.054 19:03:51 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:19:20.054 19:03:51 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.054 19:03:51 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:19:20.054 19:03:51 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.054 19:03:51 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:20.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.054 --rc genhtml_branch_coverage=1 00:19:20.054 --rc genhtml_function_coverage=1 00:19:20.054 --rc genhtml_legend=1 00:19:20.054 --rc geninfo_all_blocks=1 00:19:20.054 --rc geninfo_unexecuted_blocks=1 00:19:20.054 00:19:20.054 ' 00:19:20.054 19:03:51 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:20.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.054 --rc genhtml_branch_coverage=1 00:19:20.054 --rc genhtml_function_coverage=1 00:19:20.054 --rc genhtml_legend=1 00:19:20.054 --rc geninfo_all_blocks=1 00:19:20.054 --rc geninfo_unexecuted_blocks=1 00:19:20.054 00:19:20.054 ' 00:19:20.054 19:03:51 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:20.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.054 --rc genhtml_branch_coverage=1 00:19:20.054 --rc genhtml_function_coverage=1 00:19:20.054 --rc genhtml_legend=1 00:19:20.054 --rc geninfo_all_blocks=1 00:19:20.054 --rc geninfo_unexecuted_blocks=1 00:19:20.054 00:19:20.054 ' 00:19:20.054 19:03:51 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:20.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.055 --rc genhtml_branch_coverage=1 00:19:20.055 --rc genhtml_function_coverage=1 00:19:20.055 --rc genhtml_legend=1 00:19:20.055 --rc geninfo_all_blocks=1 00:19:20.055 --rc geninfo_unexecuted_blocks=1 00:19:20.055 00:19:20.055 ' 00:19:20.055 19:03:51 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:20.055 19:03:51 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:20.055 19:03:51 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:20.055 19:03:51 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:20.055 19:03:51 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:20.055 19:03:51 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:20.055 19:03:51 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:20.055 19:03:51 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:20.055 19:03:51 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:19:20.055 19:03:51 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:19:20.055 19:03:51 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76017 00:19:20.055 19:03:51 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:20.055 19:03:51 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.055 19:03:51 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76017 00:19:20.055 19:03:51 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76017 ']' 00:19:20.055 19:03:51 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.055 19:03:51 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.055 19:03:51 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.055 19:03:51 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.055 19:03:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.313 [2024-11-26 19:03:51.328418] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:19:20.313 [2024-11-26 19:03:51.328606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76017 ] 00:19:20.570 [2024-11-26 19:03:51.536794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:20.570 [2024-11-26 19:03:51.658047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.570 [2024-11-26 19:03:51.658054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.504 19:03:52 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.504 19:03:52 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:19:21.504 19:03:52 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:19:21.504 19:03:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.504 19:03:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.504 [2024-11-26 19:03:52.437201] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:21.504 [2024-11-26 19:03:52.439631] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:21.504 19:03:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.504 19:03:52 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:21.504 19:03:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.504 19:03:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.504 malloc0 00:19:21.504 19:03:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.504 19:03:52 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:19:21.504 19:03:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.504 19:03:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.504 [2024-11-26 19:03:52.579389] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:19:21.504 [2024-11-26 19:03:52.579529] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:19:21.504 [2024-11-26 19:03:52.579553] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:21.504 [2024-11-26 19:03:52.579566] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:21.504 [2024-11-26 19:03:52.587236] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:21.504 [2024-11-26 19:03:52.587264] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:21.504 [2024-11-26 19:03:52.595258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:21.504 [2024-11-26 19:03:52.595493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:21.504 [2024-11-26 19:03:52.606226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:21.504 1 00:19:21.504 19:03:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.504 19:03:52 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:19:22.438 19:03:53 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76060 00:19:22.438 19:03:53 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:19:22.438 19:03:53 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:19:22.696 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.696 fio-3.35 00:19:22.696 Starting 1 process 00:19:27.964 19:03:58 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76017 00:19:27.964 19:03:58 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:19:33.226 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76017 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:19:33.226 19:04:03 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76166 00:19:33.226 19:04:03 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:33.226 19:04:03 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.226 19:04:03 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76166 00:19:33.226 19:04:03 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76166 ']' 00:19:33.226 19:04:03 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.226 19:04:03 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.226 19:04:03 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.226 19:04:03 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.226 19:04:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:33.226 [2024-11-26 19:04:03.752918] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:19:33.226 [2024-11-26 19:04:03.753095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76166 ] 00:19:33.226 [2024-11-26 19:04:03.940386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:33.226 [2024-11-26 19:04:04.049840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.226 [2024-11-26 19:04:04.049844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.792 19:04:04 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.792 19:04:04 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:19:33.792 19:04:04 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:19:33.792 19:04:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.792 19:04:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:33.792 [2024-11-26 19:04:04.865203] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:33.792 [2024-11-26 19:04:04.867676] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:33.792 19:04:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.792 19:04:04 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:33.792 19:04:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.792 19:04:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:33.792 malloc0 00:19:33.792 19:04:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.792 19:04:04 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:19:33.792 19:04:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.792 19:04:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:33.792 [2024-11-26 19:04:04.999395] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:19:33.792 [2024-11-26 19:04:04.999461] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:33.792 [2024-11-26 19:04:04.999479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:33.792 1 00:19:33.792 19:04:05 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.792 19:04:05 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76060 00:19:34.051 [2024-11-26 19:04:05.009203] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:34.051 [2024-11-26 19:04:05.009238] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:19:34.988 [2024-11-26 19:04:06.010219] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:34.988 [2024-11-26 19:04:06.019233] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:34.988 [2024-11-26 19:04:06.019263] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:19:35.933 [2024-11-26 19:04:07.019307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:35.933 [2024-11-26 19:04:07.020240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:35.933 [2024-11-26 19:04:07.020271] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:19:36.867 [2024-11-26 19:04:08.020306] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:36.867 [2024-11-26 19:04:08.024228] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:36.867 [2024-11-26 19:04:08.024265] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:19:36.867 [2024-11-26 19:04:08.024282] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:19:36.867 [2024-11-26 19:04:08.024408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:19:58.787 [2024-11-26 19:04:29.091232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:19:58.787 [2024-11-26 19:04:29.098279] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:19:58.787 [2024-11-26 19:04:29.106751] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:19:58.787 [2024-11-26 19:04:29.106806] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:20:25.432 00:20:25.432 fio_test: (groupid=0, jobs=1): err= 0: pid=76063: Tue Nov 26 19:04:53 2024 00:20:25.432 read: IOPS=9528, BW=37.2MiB/s (39.0MB/s)(2233MiB/60005msec) 00:20:25.432 slat (nsec): min=1848, max=319406, avg=6661.62, stdev=2776.26 00:20:25.432 clat (usec): min=1071, max=30496k, avg=6977.60, stdev=337370.06 00:20:25.433 lat (usec): min=1080, max=30496k, avg=6984.27, stdev=337370.06 00:20:25.433 clat percentiles (msec): 00:20:25.433 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:20:25.433 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 4], 60.00th=[ 4], 00:20:25.433 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:20:25.433 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 10], 99.95th=[ 13], 00:20:25.433 | 99.99th=[17113] 00:20:25.433 bw ( KiB/s): min=20152, max=82992, per=100.00%, avg=76289.59, stdev=11258.68, samples=59 00:20:25.433 iops : min= 5038, max=20748, avg=19072.37, stdev=2814.66, samples=59 00:20:25.433 write: IOPS=9513, BW=37.2MiB/s (39.0MB/s)(2230MiB/60005msec); 0 zone resets 00:20:25.433 slat (nsec): min=1980, max=379089, avg=6928.92, stdev=2815.86 00:20:25.433 clat (usec): min=742, max=30496k, avg=6450.62, stdev=307337.59 00:20:25.433 lat (usec): min=770, max=30496k, avg=6457.55, stdev=307337.58 00:20:25.433 clat percentiles (msec): 00:20:25.433 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 00:20:25.433 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:20:25.433 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:20:25.433 | 99.00th=[ 8], 99.50th=[ 8], 99.90th=[ 10], 99.95th=[ 13], 00:20:25.433 | 99.99th=[17113] 00:20:25.433 bw ( KiB/s): min=21152, max=82816, per=100.00%, avg=76174.47, stdev=11216.99, samples=59 00:20:25.433 iops : min= 5288, max=20704, avg=19043.59, stdev=2804.24, samples=59 00:20:25.433 lat (usec) : 750=0.01% 00:20:25.433 lat (msec) : 2=0.07%, 4=91.55%, 10=8.31%, 20=0.07%, >=2000=0.01% 00:20:25.433 cpu : usr=5.67%, sys=12.46%, ctx=37787, majf=0, minf=13 00:20:25.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:25.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:25.433 issued rwts: total=571728,570840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.433 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:20:25.433 00:20:25.433 Run status group 0 (all jobs): 00:20:25.433 READ: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=2233MiB (2342MB), run=60005-60005msec 00:20:25.433 WRITE: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=2230MiB (2338MB), run=60005-60005msec 00:20:25.433 00:20:25.433 Disk stats (read/write): 00:20:25.433 ublkb1: ios=569311/568471, merge=0/0, ticks=3928558/3553491, in_queue=7482050, util=99.94% 00:20:25.433 19:04:53 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:20:25.433 19:04:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.433 19:04:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.433 [2024-11-26 19:04:53.906801] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:25.433 [2024-11-26 19:04:53.943251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:25.433 [2024-11-26 19:04:53.943511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:25.433 [2024-11-26 19:04:53.951238] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:25.433 [2024-11-26 19:04:53.951416] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:25.433 [2024-11-26 19:04:53.951443] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:25.433 19:04:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.433 19:04:53 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:20:25.433 19:04:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.433 19:04:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.433 [2024-11-26 19:04:53.967374] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:25.433 [2024-11-26 19:04:53.975191] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:25.433 [2024-11-26 19:04:53.975258] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:25.433 19:04:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.433 19:04:53 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:20:25.433 19:04:53 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:20:25.433 19:04:53 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76166 00:20:25.433 19:04:53 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76166 ']' 00:20:25.433 19:04:53 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76166 00:20:25.433 19:04:53 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:20:25.433 19:04:53 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.433 19:04:53 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76166 00:20:25.433 19:04:54 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.433 19:04:54 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.433 killing process with pid 76166 00:20:25.433 19:04:54 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76166' 00:20:25.433 19:04:54 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76166 00:20:25.433 19:04:54 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76166 00:20:25.433 [2024-11-26 19:04:55.452658] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 
00:20:25.433 [2024-11-26 19:04:55.452733] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:25.692 00:20:25.692 real 1m5.664s 00:20:25.692 user 1m50.445s 00:20:25.692 sys 0m21.108s 00:20:25.692 19:04:56 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.692 ************************************ 00:20:25.692 END TEST ublk_recovery 00:20:25.692 19:04:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.692 ************************************ 00:20:25.692 19:04:56 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:20:25.692 19:04:56 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:25.692 19:04:56 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:25.692 19:04:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.692 19:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:25.692 19:04:56 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:25.692 19:04:56 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:25.692 19:04:56 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:25.692 19:04:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:25.692 19:04:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:25.692 19:04:56 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:25.692 19:04:56 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:25.692 19:04:56 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:25.692 19:04:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:25.692 19:04:56 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:20:25.692 19:04:56 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:25.692 19:04:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:25.692 19:04:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.692 19:04:56 -- common/autotest_common.sh@10 -- # set +x 00:20:25.692 ************************************ 00:20:25.692 START TEST ftl 00:20:25.692 ************************************ 00:20:25.692 19:04:56 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:25.692 * Looking for test storage... 00:20:25.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:25.692 19:04:56 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:25.692 19:04:56 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:25.692 19:04:56 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:20:25.951 19:04:56 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:25.951 19:04:56 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.951 19:04:56 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.951 19:04:56 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.951 19:04:56 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.951 19:04:56 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.951 19:04:56 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.951 19:04:56 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.951 19:04:56 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.951 19:04:56 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.951 19:04:56 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.951 19:04:56 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.951 19:04:56 ftl -- scripts/common.sh@344 -- # case "$op" in 00:20:25.951 19:04:56 ftl -- scripts/common.sh@345 -- # : 1 00:20:25.951 19:04:56 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.951 19:04:56 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.951 19:04:56 ftl -- scripts/common.sh@365 -- # decimal 1 00:20:25.951 19:04:56 ftl -- scripts/common.sh@353 -- # local d=1 00:20:25.951 19:04:56 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.951 19:04:56 ftl -- scripts/common.sh@355 -- # echo 1 00:20:25.951 19:04:56 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.951 19:04:56 ftl -- scripts/common.sh@366 -- # decimal 2 00:20:25.951 19:04:56 ftl -- scripts/common.sh@353 -- # local d=2 00:20:25.951 19:04:56 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.951 19:04:56 ftl -- scripts/common.sh@355 -- # echo 2 00:20:25.951 19:04:56 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.951 19:04:56 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.951 19:04:56 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.951 19:04:56 ftl -- scripts/common.sh@368 -- # return 0 00:20:25.951 19:04:56 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.951 19:04:56 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:25.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.951 --rc genhtml_branch_coverage=1 00:20:25.951 --rc genhtml_function_coverage=1 00:20:25.951 --rc genhtml_legend=1 00:20:25.951 --rc geninfo_all_blocks=1 00:20:25.951 --rc geninfo_unexecuted_blocks=1 00:20:25.951 00:20:25.951 ' 00:20:25.951 19:04:56 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:25.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.951 --rc genhtml_branch_coverage=1 00:20:25.951 --rc genhtml_function_coverage=1 00:20:25.951 --rc genhtml_legend=1 00:20:25.951 --rc geninfo_all_blocks=1 00:20:25.951 --rc geninfo_unexecuted_blocks=1 00:20:25.951 00:20:25.951 ' 00:20:25.951 19:04:56 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:25.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.951 --rc genhtml_branch_coverage=1 00:20:25.951 --rc genhtml_function_coverage=1 00:20:25.951 --rc genhtml_legend=1 00:20:25.951 --rc geninfo_all_blocks=1 00:20:25.951 --rc geninfo_unexecuted_blocks=1 00:20:25.951 00:20:25.951 ' 00:20:25.951 19:04:56 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:25.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.951 --rc genhtml_branch_coverage=1 00:20:25.951 --rc genhtml_function_coverage=1 00:20:25.951 --rc genhtml_legend=1 00:20:25.951 --rc geninfo_all_blocks=1 00:20:25.951 --rc geninfo_unexecuted_blocks=1 00:20:25.951 00:20:25.951 ' 00:20:25.951 19:04:56 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:25.951 19:04:56 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:25.951 19:04:56 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:25.951 19:04:56 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:25.951 19:04:56 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:25.951 19:04:56 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:25.951 19:04:56 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:25.951 19:04:56 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:25.951 19:04:56 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:25.951 19:04:56 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.951 19:04:56 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.951 19:04:56 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:25.952 19:04:56 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:25.952 19:04:56 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:25.952 19:04:56 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:25.952 19:04:56 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:25.952 19:04:56 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:25.952 19:04:56 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.952 19:04:56 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.952 19:04:56 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:25.952 19:04:56 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:25.952 19:04:56 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:25.952 19:04:56 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:25.952 19:04:56 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:25.952 19:04:56 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:25.952 19:04:56 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:25.952 19:04:56 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:25.952 19:04:56 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:25.952 19:04:56 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:25.952 19:04:56 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:25.952 19:04:56 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:20:25.952 19:04:56 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:20:25.952 19:04:56 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:20:25.952 19:04:56 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:20:25.952 19:04:56 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:26.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:26.468 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:26.468 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:26.468 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:26.468 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:26.468 19:04:57 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76946 00:20:26.468 19:04:57 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76946 00:20:26.468 19:04:57 ftl -- common/autotest_common.sh@835 -- # '[' -z 76946 ']' 00:20:26.468 19:04:57 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:20:26.468 19:04:57 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.468 19:04:57 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.468 19:04:57 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.468 19:04:57 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.468 19:04:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:26.468 [2024-11-26 19:04:57.611987] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:20:26.468 [2024-11-26 19:04:57.612188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76946 ] 00:20:26.726 [2024-11-26 19:04:57.798142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.726 [2024-11-26 19:04:57.926344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.662 19:04:58 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.662 19:04:58 ftl -- common/autotest_common.sh@868 -- # return 0 00:20:27.662 19:04:58 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:20:27.662 19:04:58 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:29.051 19:04:59 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:20:29.051 19:04:59 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:29.308 19:05:00 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:20:29.308 19:05:00 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:29.308 19:05:00 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:29.565 19:05:00 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:20:29.565 19:05:00 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:20:29.565 19:05:00 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:20:29.565 19:05:00 ftl -- ftl/ftl.sh@50 -- # break 00:20:29.565 19:05:00 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:20:29.565 19:05:00 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:20:29.565 19:05:00 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:29.565 19:05:00 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:29.823 19:05:01 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:20:29.823 19:05:01 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:20:29.823 19:05:01 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:20:29.823 19:05:01 ftl -- ftl/ftl.sh@63 -- # break 00:20:29.823 19:05:01 ftl -- ftl/ftl.sh@66 -- # killprocess 76946 00:20:29.823 19:05:01 ftl -- common/autotest_common.sh@954 -- # '[' -z 76946 ']' 00:20:29.823 19:05:01 ftl -- common/autotest_common.sh@958 -- # kill -0 76946 00:20:29.823 19:05:01 ftl -- common/autotest_common.sh@959 -- # uname 00:20:29.823 19:05:01 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.081 19:05:01 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76946 00:20:30.081 19:05:01 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.081 19:05:01 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.081 killing process with pid 76946 00:20:30.081 19:05:01 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76946' 00:20:30.081 19:05:01 ftl -- common/autotest_common.sh@973 -- # kill 76946 00:20:30.081 19:05:01 ftl -- common/autotest_common.sh@978 -- # wait 76946 00:20:31.982 19:05:03 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:20:31.983 19:05:03 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:31.983 19:05:03 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:31.983 19:05:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.983 19:05:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:31.983 ************************************ 00:20:31.983 START TEST ftl_fio_basic 00:20:31.983 ************************************ 00:20:31.983 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:32.241 * Looking for test storage... 00:20:32.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:32.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.242 --rc genhtml_branch_coverage=1 00:20:32.242 --rc genhtml_function_coverage=1 00:20:32.242 --rc genhtml_legend=1 00:20:32.242 --rc geninfo_all_blocks=1 00:20:32.242 --rc geninfo_unexecuted_blocks=1 00:20:32.242 00:20:32.242 ' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:32.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.242 --rc genhtml_branch_coverage=1 00:20:32.242 --rc genhtml_function_coverage=1 00:20:32.242 --rc genhtml_legend=1 00:20:32.242 --rc geninfo_all_blocks=1 00:20:32.242 --rc geninfo_unexecuted_blocks=1 00:20:32.242 00:20:32.242 ' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:32.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.242 --rc genhtml_branch_coverage=1 00:20:32.242 --rc genhtml_function_coverage=1 00:20:32.242 --rc genhtml_legend=1 00:20:32.242 --rc geninfo_all_blocks=1 00:20:32.242 --rc geninfo_unexecuted_blocks=1 00:20:32.242 00:20:32.242 ' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:32.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.242 --rc genhtml_branch_coverage=1 00:20:32.242 --rc genhtml_function_coverage=1 00:20:32.242 --rc genhtml_legend=1 00:20:32.242 --rc geninfo_all_blocks=1 00:20:32.242 --rc geninfo_unexecuted_blocks=1 00:20:32.242 00:20:32.242 ' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77094 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77094 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77094 ']' 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.242 19:05:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:32.501 [2024-11-26 19:05:03.463102] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:20:32.501 [2024-11-26 19:05:03.463983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77094 ] 00:20:32.501 [2024-11-26 19:05:03.662472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:32.759 [2024-11-26 19:05:03.793202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.759 [2024-11-26 19:05:03.793271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.759 [2024-11-26 19:05:03.793273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.693 19:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.693 19:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:20:33.693 19:05:04 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:33.693 19:05:04 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:20:33.693 19:05:04 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:33.693 19:05:04 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:20:33.693 19:05:04 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:20:33.693 19:05:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:33.952 19:05:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:33.952 19:05:04 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:20:33.952 19:05:04 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:33.952 19:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:33.952 19:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:33.952 19:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:33.952 19:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:33.952 19:05:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:34.211 19:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:34.211 { 00:20:34.211 "name": "nvme0n1", 00:20:34.211 "aliases": [ 00:20:34.211 "25212fe7-67da-41e2-b01d-6f4e00be7993" 00:20:34.211 ], 00:20:34.211 "product_name": "NVMe disk", 00:20:34.211 "block_size": 4096, 00:20:34.211 "num_blocks": 1310720, 00:20:34.211 "uuid": "25212fe7-67da-41e2-b01d-6f4e00be7993", 00:20:34.211 "numa_id": -1, 00:20:34.211 "assigned_rate_limits": { 00:20:34.211 "rw_ios_per_sec": 0, 00:20:34.211 "rw_mbytes_per_sec": 0, 00:20:34.211 "r_mbytes_per_sec": 0, 00:20:34.211 "w_mbytes_per_sec": 0 00:20:34.211 }, 00:20:34.211 "claimed": false, 00:20:34.211 "zoned": false, 00:20:34.211 "supported_io_types": { 00:20:34.211 "read": true, 00:20:34.211 "write": true, 00:20:34.211 "unmap": true, 00:20:34.211 "flush": true, 00:20:34.211 "reset": true, 00:20:34.211 "nvme_admin": true, 00:20:34.211 "nvme_io": true, 00:20:34.211 "nvme_io_md": false, 00:20:34.211 "write_zeroes": true, 00:20:34.211 "zcopy": false, 00:20:34.211 "get_zone_info": false, 00:20:34.211 "zone_management": false, 00:20:34.211 "zone_append": false, 00:20:34.211 "compare": true, 00:20:34.211 "compare_and_write": false, 00:20:34.211 "abort": true, 00:20:34.211 
"seek_hole": false, 00:20:34.211 "seek_data": false, 00:20:34.211 "copy": true, 00:20:34.211 "nvme_iov_md": false 00:20:34.211 }, 00:20:34.211 "driver_specific": { 00:20:34.211 "nvme": [ 00:20:34.211 { 00:20:34.211 "pci_address": "0000:00:11.0", 00:20:34.211 "trid": { 00:20:34.211 "trtype": "PCIe", 00:20:34.211 "traddr": "0000:00:11.0" 00:20:34.211 }, 00:20:34.211 "ctrlr_data": { 00:20:34.211 "cntlid": 0, 00:20:34.212 "vendor_id": "0x1b36", 00:20:34.212 "model_number": "QEMU NVMe Ctrl", 00:20:34.212 "serial_number": "12341", 00:20:34.212 "firmware_revision": "8.0.0", 00:20:34.212 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:34.212 "oacs": { 00:20:34.212 "security": 0, 00:20:34.212 "format": 1, 00:20:34.212 "firmware": 0, 00:20:34.212 "ns_manage": 1 00:20:34.212 }, 00:20:34.212 "multi_ctrlr": false, 00:20:34.212 "ana_reporting": false 00:20:34.212 }, 00:20:34.212 "vs": { 00:20:34.212 "nvme_version": "1.4" 00:20:34.212 }, 00:20:34.212 "ns_data": { 00:20:34.212 "id": 1, 00:20:34.212 "can_share": false 00:20:34.212 } 00:20:34.212 } 00:20:34.212 ], 00:20:34.212 "mp_policy": "active_passive" 00:20:34.212 } 00:20:34.212 } 00:20:34.212 ]' 00:20:34.212 19:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:34.212 19:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:34.212 19:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:34.212 19:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:34.212 19:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:34.212 19:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:20:34.212 19:05:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:20:34.212 19:05:05 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:34.212 19:05:05 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:20:34.212 19:05:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:34.212 19:05:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:34.778 19:05:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:20:34.778 19:05:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:35.038 19:05:06 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=6eb0d872-d7df-40fa-9abe-ec79085f7c28 00:20:35.038 19:05:06 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6eb0d872-d7df-40fa-9abe-ec79085f7c28 00:20:35.296 19:05:06 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=80e09ca3-70fe-4f4a-afe1-e847189d55c6 00:20:35.296 19:05:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 80e09ca3-70fe-4f4a-afe1-e847189d55c6 00:20:35.296 19:05:06 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:20:35.296 19:05:06 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:35.296 19:05:06 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=80e09ca3-70fe-4f4a-afe1-e847189d55c6 00:20:35.296 19:05:06 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:20:35.296 19:05:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 80e09ca3-70fe-4f4a-afe1-e847189d55c6 00:20:35.296 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=80e09ca3-70fe-4f4a-afe1-e847189d55c6 
00:20:35.296 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:35.296 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:35.296 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:35.296 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 80e09ca3-70fe-4f4a-afe1-e847189d55c6 00:20:35.555 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:35.555 { 00:20:35.555 "name": "80e09ca3-70fe-4f4a-afe1-e847189d55c6", 00:20:35.555 "aliases": [ 00:20:35.555 "lvs/nvme0n1p0" 00:20:35.555 ], 00:20:35.555 "product_name": "Logical Volume", 00:20:35.555 "block_size": 4096, 00:20:35.555 "num_blocks": 26476544, 00:20:35.555 "uuid": "80e09ca3-70fe-4f4a-afe1-e847189d55c6", 00:20:35.555 "assigned_rate_limits": { 00:20:35.555 "rw_ios_per_sec": 0, 00:20:35.555 "rw_mbytes_per_sec": 0, 00:20:35.555 "r_mbytes_per_sec": 0, 00:20:35.555 "w_mbytes_per_sec": 0 00:20:35.555 }, 00:20:35.555 "claimed": false, 00:20:35.555 "zoned": false, 00:20:35.555 "supported_io_types": { 00:20:35.555 "read": true, 00:20:35.555 "write": true, 00:20:35.555 "unmap": true, 00:20:35.555 "flush": false, 00:20:35.555 "reset": true, 00:20:35.555 "nvme_admin": false, 00:20:35.555 "nvme_io": false, 00:20:35.555 "nvme_io_md": false, 00:20:35.555 "write_zeroes": true, 00:20:35.555 "zcopy": false, 00:20:35.555 "get_zone_info": false, 00:20:35.555 "zone_management": false, 00:20:35.555 "zone_append": false, 00:20:35.555 "compare": false, 00:20:35.555 "compare_and_write": false, 00:20:35.555 "abort": false, 00:20:35.555 "seek_hole": true, 00:20:35.555 "seek_data": true, 00:20:35.555 "copy": false, 00:20:35.555 "nvme_iov_md": false 00:20:35.555 }, 00:20:35.555 "driver_specific": { 00:20:35.555 "lvol": { 00:20:35.555 "lvol_store_uuid": "6eb0d872-d7df-40fa-9abe-ec79085f7c28", 00:20:35.555 "base_bdev": "nvme0n1", 00:20:35.555 "thin_provision": true, 00:20:35.555 "num_allocated_clusters": 0, 00:20:35.555 "snapshot": false, 00:20:35.555 "clone": false, 00:20:35.555 "esnap_clone": false 00:20:35.555 } 00:20:35.555 } 00:20:35.555 } 00:20:35.555 ]' 00:20:35.555 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:35.555 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:35.555 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:35.555 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:35.555 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:35.555 19:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:35.555 19:05:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:20:35.555 19:05:06 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:20:35.555 19:05:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:35.959 19:05:07 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:35.959 19:05:07 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:35.959 19:05:07 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 80e09ca3-70fe-4f4a-afe1-e847189d55c6 00:20:35.959 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=80e09ca3-70fe-4f4a-afe1-e847189d55c6 00:20:35.959 19:05:07 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:35.959 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:35.959 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:35.959 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 80e09ca3-70fe-4f4a-afe1-e847189d55c6 00:20:36.217 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:36.217 { 00:20:36.217 "name": "80e09ca3-70fe-4f4a-afe1-e847189d55c6", 00:20:36.217 "aliases": [ 00:20:36.217 "lvs/nvme0n1p0" 00:20:36.217 ], 00:20:36.217 "product_name": "Logical Volume", 00:20:36.217 "block_size": 4096, 00:20:36.217 "num_blocks": 26476544, 00:20:36.217 "uuid": "80e09ca3-70fe-4f4a-afe1-e847189d55c6", 00:20:36.217 "assigned_rate_limits": { 00:20:36.217 "rw_ios_per_sec": 0, 00:20:36.217 "rw_mbytes_per_sec": 0, 00:20:36.217 "r_mbytes_per_sec": 0, 00:20:36.217 "w_mbytes_per_sec": 0 00:20:36.217 }, 00:20:36.217 "claimed": false, 00:20:36.217 "zoned": false, 00:20:36.217 "supported_io_types": { 00:20:36.217 "read": true, 00:20:36.217 "write": true, 00:20:36.217 "unmap": true, 00:20:36.217 "flush": false, 00:20:36.217 "reset": true, 00:20:36.217 "nvme_admin": false, 00:20:36.217 "nvme_io": false, 00:20:36.217 "nvme_io_md": false, 00:20:36.217 "write_zeroes": true, 00:20:36.217 "zcopy": false, 00:20:36.217 "get_zone_info": false, 00:20:36.217 "zone_management": false, 00:20:36.217 "zone_append": false, 00:20:36.217 "compare": false, 00:20:36.217 "compare_and_write": false, 00:20:36.217 "abort": false, 00:20:36.217 "seek_hole": true, 00:20:36.217 "seek_data": true, 00:20:36.217 "copy": false, 00:20:36.217 "nvme_iov_md": false 00:20:36.217 }, 00:20:36.217 "driver_specific": { 00:20:36.217 "lvol": { 00:20:36.217 "lvol_store_uuid": "6eb0d872-d7df-40fa-9abe-ec79085f7c28", 00:20:36.217 "base_bdev": "nvme0n1", 00:20:36.217 "thin_provision": true, 00:20:36.217 "num_allocated_clusters": 0, 00:20:36.218 "snapshot": false, 00:20:36.218 "clone": false, 00:20:36.218 "esnap_clone": false 00:20:36.218 } 00:20:36.218 } 00:20:36.218 } 00:20:36.218 ]' 00:20:36.218 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:36.475 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:36.475 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:36.475 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:36.475 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:36.475 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:36.475 19:05:07 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:20:36.475 19:05:07 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:36.732 19:05:07 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:20:36.732 19:05:07 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:20:36.732 19:05:07 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:20:36.732 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:20:36.732 19:05:07 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 80e09ca3-70fe-4f4a-afe1-e847189d55c6 00:20:36.732 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=80e09ca3-70fe-4f4a-afe1-e847189d55c6 00:20:36.732 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:36.732 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:36.732 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:36.732 19:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 80e09ca3-70fe-4f4a-afe1-e847189d55c6 00:20:36.990 19:05:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:36.990 { 00:20:36.990 "name": "80e09ca3-70fe-4f4a-afe1-e847189d55c6", 00:20:36.990 "aliases": [ 00:20:36.990 "lvs/nvme0n1p0" 00:20:36.990 ], 00:20:36.990 "product_name": "Logical Volume", 00:20:36.990 "block_size": 4096, 00:20:36.990 "num_blocks": 26476544, 00:20:36.990 "uuid": "80e09ca3-70fe-4f4a-afe1-e847189d55c6", 00:20:36.990 "assigned_rate_limits": { 00:20:36.990 "rw_ios_per_sec": 0, 00:20:36.990 "rw_mbytes_per_sec": 0, 00:20:36.990 "r_mbytes_per_sec": 0, 00:20:36.990 "w_mbytes_per_sec": 0 00:20:36.990 }, 00:20:36.990 "claimed": false, 00:20:36.990 "zoned": false, 00:20:36.990 "supported_io_types": { 00:20:36.990 "read": true, 00:20:36.990 "write": true, 00:20:36.990 "unmap": true, 00:20:36.990 "flush": false, 00:20:36.990 "reset": true, 00:20:36.990 "nvme_admin": false, 00:20:36.990 "nvme_io": false, 00:20:36.990 "nvme_io_md": false, 00:20:36.990 "write_zeroes": true, 00:20:36.990 "zcopy": false, 00:20:36.990 "get_zone_info": false, 00:20:36.990 "zone_management": false, 00:20:36.990 "zone_append": false, 00:20:36.990 "compare": false, 00:20:36.990 "compare_and_write": false, 00:20:36.990 "abort": false, 00:20:36.990 "seek_hole": true, 00:20:36.990 "seek_data": true, 00:20:36.990 "copy": false, 00:20:36.990 "nvme_iov_md": false 00:20:36.990 }, 00:20:36.990 "driver_specific": { 00:20:36.990 "lvol": { 00:20:36.990 "lvol_store_uuid": "6eb0d872-d7df-40fa-9abe-ec79085f7c28", 00:20:36.990 "base_bdev": "nvme0n1", 00:20:36.991 "thin_provision": true, 00:20:36.991 "num_allocated_clusters": 0, 00:20:36.991 "snapshot": false, 00:20:36.991 "clone": false, 00:20:36.991 "esnap_clone": false 00:20:36.991 } 00:20:36.991 } 00:20:36.991 } 00:20:36.991 ]' 00:20:36.991 19:05:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:36.991 19:05:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:36.991 19:05:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:37.248 19:05:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:37.248 19:05:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:37.248 19:05:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:37.248 19:05:08 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:20:37.248 19:05:08 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:20:37.248 19:05:08 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 80e09ca3-70fe-4f4a-afe1-e847189d55c6 -c nvc0n1p0 --l2p_dram_limit 60 00:20:37.507 [2024-11-26 19:05:08.507423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.507 [2024-11-26 19:05:08.507498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:37.507 [2024-11-26 19:05:08.507528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:37.507 
[2024-11-26 19:05:08.507542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.507 [2024-11-26 19:05:08.507665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.507 [2024-11-26 19:05:08.507687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.507 [2024-11-26 19:05:08.507706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:37.507 [2024-11-26 19:05:08.507717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.507 [2024-11-26 19:05:08.507761] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:37.507 [2024-11-26 19:05:08.508770] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:37.507 [2024-11-26 19:05:08.508824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.507 [2024-11-26 19:05:08.508840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.507 [2024-11-26 19:05:08.508855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:20:37.507 [2024-11-26 19:05:08.508867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.507 [2024-11-26 19:05:08.509037] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d261fc29-7c0f-41bd-998e-66153c930dc4 00:20:37.507 [2024-11-26 19:05:08.510157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.507 [2024-11-26 19:05:08.510213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:37.507 [2024-11-26 19:05:08.510231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:37.507 [2024-11-26 19:05:08.510245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.507 [2024-11-26 19:05:08.514987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.507 [2024-11-26 19:05:08.515086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.507 [2024-11-26 19:05:08.515117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.657 ms 00:20:37.507 [2024-11-26 19:05:08.515159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.507 [2024-11-26 19:05:08.515397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.507 [2024-11-26 19:05:08.515454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.507 [2024-11-26 19:05:08.515482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:20:37.507 [2024-11-26 19:05:08.515517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.507 [2024-11-26 19:05:08.515653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.507 [2024-11-26 19:05:08.515686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:37.507 [2024-11-26 19:05:08.515702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:37.507 [2024-11-26 19:05:08.515715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.507 [2024-11-26 19:05:08.515778] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:37.507 [2024-11-26 19:05:08.520363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.507 [2024-11-26 
19:05:08.520423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.507 [2024-11-26 19:05:08.520452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.608 ms 00:20:37.507 [2024-11-26 19:05:08.520464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.507 [2024-11-26 19:05:08.520538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.507 [2024-11-26 19:05:08.520560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:37.507 [2024-11-26 19:05:08.520576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:37.507 [2024-11-26 19:05:08.520588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.507 [2024-11-26 19:05:08.520652] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:37.507 [2024-11-26 19:05:08.520850] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:37.507 [2024-11-26 19:05:08.520890] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:37.507 [2024-11-26 19:05:08.520908] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:37.507 [2024-11-26 19:05:08.520931] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:37.507 [2024-11-26 19:05:08.520946] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:37.507 [2024-11-26 19:05:08.520964] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:37.507 [2024-11-26 19:05:08.520976] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:37.507 [2024-11-26 19:05:08.520991] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:37.507 [2024-11-26 19:05:08.521003] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:37.507 [2024-11-26 19:05:08.521028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.507 [2024-11-26 19:05:08.521040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:37.507 [2024-11-26 19:05:08.521060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:20:37.507 [2024-11-26 19:05:08.521073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.507 [2024-11-26 19:05:08.521229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.507 [2024-11-26 19:05:08.521253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:37.507 [2024-11-26 19:05:08.521270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:37.507 [2024-11-26 19:05:08.521281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.507 [2024-11-26 19:05:08.521418] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:37.507 [2024-11-26 19:05:08.521437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:37.507 [2024-11-26 19:05:08.521452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.507 [2024-11-26 19:05:08.521464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.507 [2024-11-26 19:05:08.521477] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:20:37.507 [2024-11-26 19:05:08.521489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:37.507 [2024-11-26 19:05:08.521502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:37.507 [2024-11-26 19:05:08.521512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:37.507 [2024-11-26 19:05:08.521525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:37.507 [2024-11-26 19:05:08.521535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.507 [2024-11-26 19:05:08.521548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:37.507 [2024-11-26 19:05:08.521559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:37.507 [2024-11-26 19:05:08.521571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.507 [2024-11-26 19:05:08.521582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:37.507 [2024-11-26 19:05:08.521595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:37.507 [2024-11-26 19:05:08.521605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.507 [2024-11-26 19:05:08.521622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:37.507 [2024-11-26 19:05:08.521634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:37.507 [2024-11-26 19:05:08.521647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.507 [2024-11-26 19:05:08.521658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:37.507 [2024-11-26 19:05:08.521670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:37.507 [2024-11-26 19:05:08.521681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.507 [2024-11-26 19:05:08.521702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:37.507 [2024-11-26 19:05:08.521715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:37.507 [2024-11-26 19:05:08.521730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.508 [2024-11-26 19:05:08.521742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:37.508 [2024-11-26 19:05:08.521757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:37.508 [2024-11-26 19:05:08.521769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.508 [2024-11-26 19:05:08.521790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:37.508 [2024-11-26 19:05:08.521801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:37.508 [2024-11-26 19:05:08.521817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.508 [2024-11-26 19:05:08.521828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:37.508 [2024-11-26 19:05:08.521849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:37.508 [2024-11-26 19:05:08.521886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.508 [2024-11-26 19:05:08.521901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:37.508 [2024-11-26 19:05:08.521911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:37.508 [2024-11-26 19:05:08.521924] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.508 [2024-11-26 19:05:08.521934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:37.508 [2024-11-26 19:05:08.521947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:37.508 [2024-11-26 19:05:08.521958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.508 [2024-11-26 19:05:08.521970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:37.508 [2024-11-26 19:05:08.521981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:37.508 [2024-11-26 19:05:08.521995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.508 [2024-11-26 19:05:08.522006] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:37.508 [2024-11-26 19:05:08.522020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:37.508 [2024-11-26 19:05:08.522031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.508 [2024-11-26 19:05:08.522044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.508 [2024-11-26 19:05:08.522055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:37.508 [2024-11-26 19:05:08.522071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:37.508 [2024-11-26 19:05:08.522081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:37.508 [2024-11-26 19:05:08.522094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:37.508 [2024-11-26 19:05:08.522105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:37.508 [2024-11-26 19:05:08.522117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:37.508 [2024-11-26 19:05:08.522137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:37.508 [2024-11-26 19:05:08.522157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.508 [2024-11-26 19:05:08.522185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:37.508 [2024-11-26 19:05:08.522202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:37.508 [2024-11-26 19:05:08.522214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:37.508 [2024-11-26 19:05:08.522227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:37.508 [2024-11-26 19:05:08.522239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:37.508 [2024-11-26 19:05:08.522253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:37.508 [2024-11-26 19:05:08.522265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:37.508 [2024-11-26 19:05:08.522278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:20:37.508 [2024-11-26 19:05:08.522289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:37.508 [2024-11-26 19:05:08.522305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:37.508 [2024-11-26 19:05:08.522316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:37.508 [2024-11-26 19:05:08.522333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:37.508 [2024-11-26 19:05:08.522345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:37.508 [2024-11-26 19:05:08.522358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:37.508 [2024-11-26 19:05:08.522370] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:37.508 [2024-11-26 19:05:08.522388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.508 [2024-11-26 19:05:08.522401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:37.508 [2024-11-26 19:05:08.522415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:37.508 [2024-11-26 19:05:08.522427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:37.508 [2024-11-26 19:05:08.522445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:37.508 [2024-11-26 19:05:08.522459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.508 [2024-11-26 19:05:08.522476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:37.508 [2024-11-26 19:05:08.522490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:20:37.508 [2024-11-26 19:05:08.522506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.508 [2024-11-26 19:05:08.522598] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
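The `/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected` message earlier in this run is a shell quoting bug, not an FTL failure: the xtrace shows `'[' -eq 1 ']'`, meaning the left-hand operand expanded to an empty string, so the `[` builtin parsed `-eq` as a unary operator, the condition returned a non-zero status, and the script simply continued. A minimal sketch of the failure and the usual fix, with `l2p_flat` as a stand-in for whatever variable fio.sh tests on that line:

    # Reproduces "[: -eq: unary operator expected": the empty expansion
    # leaves `[` with only two arguments, `-eq` and `1`.
    l2p_flat=""
    [ $l2p_flat -eq 1 ] && echo "flat L2P"

    # Fix: quote the expansion and give it a default, so `[` always
    # sees two operands ...
    [ "${l2p_flat:-0}" -eq 1 ] && echo "flat L2P"

    # ... or use [[ ]], where an empty arithmetic operand is treated as 0.
    [[ $l2p_flat -eq 1 ]] && echo "flat L2P"
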
00:20:37.508 [2024-11-26 19:05:08.522634] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:40.793 [2024-11-26 19:05:11.497926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.498063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:40.793 [2024-11-26 19:05:11.498107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2975.338 ms 00:20:40.793 [2024-11-26 19:05:11.498129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.532799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.532881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.793 [2024-11-26 19:05:11.532905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.366 ms 00:20:40.793 [2024-11-26 19:05:11.532921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.533122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.533148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.793 [2024-11-26 19:05:11.533164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:40.793 [2024-11-26 19:05:11.533206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.584945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.585024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.793 [2024-11-26 19:05:11.585047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.667 ms 00:20:40.793 [2024-11-26 19:05:11.585070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.585153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.585195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.793 [2024-11-26 19:05:11.585213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:40.793 [2024-11-26 19:05:11.585226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.585665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.585689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.793 [2024-11-26 19:05:11.585707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:20:40.793 [2024-11-26 19:05:11.585720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.585892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.585914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.793 [2024-11-26 19:05:11.585927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:20:40.793 [2024-11-26 19:05:11.585943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.604329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.604599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.793 [2024-11-26 
19:05:11.604633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.352 ms 00:20:40.793 [2024-11-26 19:05:11.604649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.618373] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:40.793 [2024-11-26 19:05:11.632594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.632673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:40.793 [2024-11-26 19:05:11.632704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.776 ms 00:20:40.793 [2024-11-26 19:05:11.632717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.692784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.692871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:40.793 [2024-11-26 19:05:11.692900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.991 ms 00:20:40.793 [2024-11-26 19:05:11.692912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.693189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.693213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:40.793 [2024-11-26 19:05:11.693234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:20:40.793 [2024-11-26 19:05:11.693247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.725615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.725687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:40.793 [2024-11-26 19:05:11.725712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.257 ms 00:20:40.793 [2024-11-26 19:05:11.725725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.757366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.757434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:40.793 [2024-11-26 19:05:11.757459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.565 ms 00:20:40.793 [2024-11-26 19:05:11.757475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.758254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.758346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.793 [2024-11-26 19:05:11.758375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:20:40.793 [2024-11-26 19:05:11.758387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.857128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.857203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:40.793 [2024-11-26 19:05:11.857235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.639 ms 00:20:40.793 [2024-11-26 19:05:11.857248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 
19:05:11.890391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.890463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:40.793 [2024-11-26 19:05:11.890489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.947 ms 00:20:40.793 [2024-11-26 19:05:11.890501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.922983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.923057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:40.793 [2024-11-26 19:05:11.923081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.398 ms 00:20:40.793 [2024-11-26 19:05:11.923093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.955284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.955353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:40.793 [2024-11-26 19:05:11.955378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.112 ms 00:20:40.793 [2024-11-26 19:05:11.955390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.955476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.955494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:40.793 [2024-11-26 19:05:11.955516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:40.793 [2024-11-26 19:05:11.955528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.955728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.793 [2024-11-26 19:05:11.955759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:40.793 [2024-11-26 19:05:11.955777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:40.793 [2024-11-26 19:05:11.955789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.793 [2024-11-26 19:05:11.957020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3449.048 ms, result 0 00:20:40.793 { 00:20:40.793 "name": "ftl0", 00:20:40.793 "uuid": "d261fc29-7c0f-41bd-998e-66153c930dc4" 00:20:40.793 } 00:20:40.793 19:05:11 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:20:40.793 19:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:40.793 19:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:40.793 19:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:20:40.794 19:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:40.794 19:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:40.794 19:05:11 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:41.052 19:05:12 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:41.620 [ 00:20:41.620 { 00:20:41.620 "name": "ftl0", 00:20:41.620 "aliases": [ 00:20:41.620 "d261fc29-7c0f-41bd-998e-66153c930dc4" 00:20:41.620 ], 00:20:41.620 "product_name": "FTL 
disk", 00:20:41.620 "block_size": 4096, 00:20:41.620 "num_blocks": 20971520, 00:20:41.620 "uuid": "d261fc29-7c0f-41bd-998e-66153c930dc4", 00:20:41.620 "assigned_rate_limits": { 00:20:41.620 "rw_ios_per_sec": 0, 00:20:41.620 "rw_mbytes_per_sec": 0, 00:20:41.620 "r_mbytes_per_sec": 0, 00:20:41.620 "w_mbytes_per_sec": 0 00:20:41.620 }, 00:20:41.620 "claimed": false, 00:20:41.620 "zoned": false, 00:20:41.620 "supported_io_types": { 00:20:41.620 "read": true, 00:20:41.620 "write": true, 00:20:41.620 "unmap": true, 00:20:41.620 "flush": true, 00:20:41.620 "reset": false, 00:20:41.620 "nvme_admin": false, 00:20:41.620 "nvme_io": false, 00:20:41.620 "nvme_io_md": false, 00:20:41.620 "write_zeroes": true, 00:20:41.620 "zcopy": false, 00:20:41.620 "get_zone_info": false, 00:20:41.620 "zone_management": false, 00:20:41.620 "zone_append": false, 00:20:41.620 "compare": false, 00:20:41.620 "compare_and_write": false, 00:20:41.620 "abort": false, 00:20:41.620 "seek_hole": false, 00:20:41.620 "seek_data": false, 00:20:41.620 "copy": false, 00:20:41.620 "nvme_iov_md": false 00:20:41.620 }, 00:20:41.620 "driver_specific": { 00:20:41.620 "ftl": { 00:20:41.620 "base_bdev": "80e09ca3-70fe-4f4a-afe1-e847189d55c6", 00:20:41.620 "cache": "nvc0n1p0" 00:20:41.620 } 00:20:41.620 } 00:20:41.620 } 00:20:41.620 ] 00:20:41.620 19:05:12 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:20:41.620 19:05:12 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:20:41.620 19:05:12 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:41.879 19:05:12 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:20:41.879 19:05:12 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:42.138 [2024-11-26 19:05:13.134309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.134382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:42.138 [2024-11-26 19:05:13.134405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:42.138 [2024-11-26 19:05:13.134423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.134467] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:42.138 [2024-11-26 19:05:13.137861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.138040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:42.138 [2024-11-26 19:05:13.138087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.364 ms 00:20:42.138 [2024-11-26 19:05:13.138101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.138660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.138689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:42.138 [2024-11-26 19:05:13.138707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:20:42.138 [2024-11-26 19:05:13.138718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.142049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.142088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:42.138 
[2024-11-26 19:05:13.142107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.291 ms 00:20:42.138 [2024-11-26 19:05:13.142119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.148839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.148998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:42.138 [2024-11-26 19:05:13.149033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.684 ms 00:20:42.138 [2024-11-26 19:05:13.149046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.180649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.180713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:42.138 [2024-11-26 19:05:13.180756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.477 ms 00:20:42.138 [2024-11-26 19:05:13.180769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.199570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.199772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:42.138 [2024-11-26 19:05:13.199814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.737 ms 00:20:42.138 [2024-11-26 19:05:13.199828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.200089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.200110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:42.138 [2024-11-26 19:05:13.200126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:20:42.138 [2024-11-26 19:05:13.200138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.231854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.231913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:42.138 [2024-11-26 19:05:13.231937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.677 ms 00:20:42.138 [2024-11-26 19:05:13.231949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.263300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.263365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:42.138 [2024-11-26 19:05:13.263395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.277 ms 00:20:42.138 [2024-11-26 19:05:13.263407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.294329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.294556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:42.138 [2024-11-26 19:05:13.294595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.842 ms 00:20:42.138 [2024-11-26 19:05:13.294609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.325854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.138 [2024-11-26 19:05:13.326071] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:42.138 [2024-11-26 19:05:13.326112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.077 ms 00:20:42.138 [2024-11-26 19:05:13.326126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.138 [2024-11-26 19:05:13.326215] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:42.138 [2024-11-26 19:05:13.326243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:42.138 [2024-11-26 19:05:13.326262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:42.138 [2024-11-26 19:05:13.326275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:42.138 [2024-11-26 19:05:13.326290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:42.138 [2024-11-26 19:05:13.326303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:42.138 [2024-11-26 19:05:13.326317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:42.138 [2024-11-26 19:05:13.326330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:42.138 [2024-11-26 19:05:13.326347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:42.138 [2024-11-26 19:05:13.326359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 
[2024-11-26 19:05:13.326550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:20:42.139 [2024-11-26 19:05:13.326899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.326992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:42.139 [2024-11-26 19:05:13.327325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:42.140 [2024-11-26 19:05:13.327698] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:42.140 [2024-11-26 19:05:13.327712] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d261fc29-7c0f-41bd-998e-66153c930dc4 00:20:42.140 [2024-11-26 19:05:13.327724] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:42.140 [2024-11-26 19:05:13.327740] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:42.140 [2024-11-26 19:05:13.327754] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:42.140 [2024-11-26 19:05:13.327768] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:42.140 [2024-11-26 19:05:13.327779] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:42.140 [2024-11-26 19:05:13.327792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:42.140 [2024-11-26 19:05:13.327803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:42.140 [2024-11-26 19:05:13.327815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:42.140 [2024-11-26 19:05:13.327826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:42.140 [2024-11-26 19:05:13.327840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.140 [2024-11-26 19:05:13.327851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:42.140 [2024-11-26 19:05:13.327866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.629 ms 00:20:42.140 [2024-11-26 19:05:13.327877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.140 [2024-11-26 19:05:13.345717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.140 [2024-11-26 19:05:13.345917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:42.140 [2024-11-26 19:05:13.346058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.748 ms 00:20:42.140 [2024-11-26 19:05:13.346111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.140 [2024-11-26 19:05:13.346716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.140 [2024-11-26 19:05:13.346854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:42.140 [2024-11-26 19:05:13.346973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:20:42.140 [2024-11-26 19:05:13.347027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.406088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.406311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.399 [2024-11-26 19:05:13.406484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 19:05:13.406609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
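Every management step in the startup and shutdown sequences above is traced as a quadruple (Action/Rollback, name, duration, status), so per-step cost can be pulled straight out of the console log. A minimal post-processing sketch, assuming the output has been saved to a file (`ftl.log` is a hypothetical name):

    # Pair each trace_step "name:" line with the "duration:" line that
    # follows it, then rank the steps; in this run "Scrub NV cache" at
    # 2975.338 ms dominates the 3449.048 ms 'FTL startup' total.
    awk '/name:/     { sub(/.*name: /, "");     step = $0 }
         /duration:/ { sub(/.*duration: /, ""); print $0 "\t" step }' ftl.log |
        sort -rn | head
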
00:20:42.399 [2024-11-26 19:05:13.406747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.406855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.399 [2024-11-26 19:05:13.406966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 19:05:13.407024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.407327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.407485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.399 [2024-11-26 19:05:13.407606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 19:05:13.407737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.407832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.407925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.399 [2024-11-26 19:05:13.408035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 19:05:13.408087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.518795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.519033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.399 [2024-11-26 19:05:13.519197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 19:05:13.519258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.605849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.606091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.399 [2024-11-26 19:05:13.606247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 19:05:13.606304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.606487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.606559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.399 [2024-11-26 19:05:13.606694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 19:05:13.606818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.606979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.607011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.399 [2024-11-26 19:05:13.607029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 19:05:13.607041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.607213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.607236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.399 [2024-11-26 19:05:13.607254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 
19:05:13.607266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.607353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.607373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:42.399 [2024-11-26 19:05:13.607389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 19:05:13.607400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.607470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.607487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.399 [2024-11-26 19:05:13.607501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 19:05:13.607515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.607585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.399 [2024-11-26 19:05:13.607603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.399 [2024-11-26 19:05:13.607617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.399 [2024-11-26 19:05:13.607629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.399 [2024-11-26 19:05:13.607834] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 473.483 ms, result 0 00:20:42.399 true 00:20:42.658 19:05:13 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77094 00:20:42.658 19:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77094 ']' 00:20:42.658 19:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77094 00:20:42.658 19:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:20:42.658 19:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.658 19:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77094 00:20:42.658 19:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.658 19:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.658 killing process with pid 77094 00:20:42.658 19:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77094' 00:20:42.658 19:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77094 00:20:42.658 19:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77094 00:20:47.929 19:05:18 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:20:47.929 19:05:18 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:47.930 19:05:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:47.930 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:20:47.930 fio-3.35 00:20:47.930 Starting 1 thread 00:20:53.195 00:20:53.195 test: (groupid=0, jobs=1): err= 0: pid=77313: Tue Nov 26 19:05:23 2024 00:20:53.195 read: IOPS=1031, BW=68.5MiB/s (71.8MB/s)(255MiB/3715msec) 00:20:53.195 slat (nsec): min=5782, max=51673, avg=7691.94, stdev=3666.89 00:20:53.195 clat (usec): min=306, max=953, avg=433.96, stdev=53.59 00:20:53.195 lat (usec): min=312, max=973, avg=441.65, stdev=54.50 00:20:53.195 clat percentiles (usec): 00:20:53.195 | 1.00th=[ 351], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 379], 00:20:53.195 | 30.00th=[ 392], 40.00th=[ 429], 50.00th=[ 441], 60.00th=[ 449], 00:20:53.195 | 70.00th=[ 453], 80.00th=[ 465], 90.00th=[ 515], 95.00th=[ 529], 00:20:53.195 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 685], 99.95th=[ 881], 00:20:53.195 | 99.99th=[ 955] 00:20:53.195 write: IOPS=1039, BW=69.0MiB/s (72.4MB/s)(256MiB/3711msec); 0 zone resets 00:20:53.195 slat (usec): min=19, max=100, avg=24.86, stdev= 6.43 00:20:53.195 clat (usec): min=357, max=1697, avg=485.36, stdev=62.47 00:20:53.195 lat (usec): min=378, max=1719, avg=510.22, stdev=62.56 00:20:53.195 clat percentiles (usec): 00:20:53.195 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 437], 00:20:53.195 | 30.00th=[ 465], 40.00th=[ 474], 50.00th=[ 478], 60.00th=[ 486], 00:20:53.195 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 578], 00:20:53.195 | 99.00th=[ 685], 99.50th=[ 725], 99.90th=[ 881], 99.95th=[ 906], 00:20:53.195 | 99.99th=[ 1696] 00:20:53.195 bw ( KiB/s): min=68136, max=72216, per=99.84%, avg=70545.14, stdev=1462.06, samples=7 00:20:53.195 iops : min= 1002, max= 1062, avg=1037.43, stdev=21.50, samples=7 00:20:53.195 lat (usec) : 500=78.36%, 750=21.39%, 1000=0.23% 00:20:53.195 lat (msec) 
: 2=0.01% 00:20:53.195 cpu : usr=99.03%, sys=0.19%, ctx=5, majf=0, minf=1169 00:20:53.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.195 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.195 00:20:53.195 Run status group 0 (all jobs): 00:20:53.195 READ: bw=68.5MiB/s (71.8MB/s), 68.5MiB/s-68.5MiB/s (71.8MB/s-71.8MB/s), io=255MiB (267MB), run=3715-3715msec 00:20:53.195 WRITE: bw=69.0MiB/s (72.4MB/s), 69.0MiB/s-69.0MiB/s (72.4MB/s-72.4MB/s), io=256MiB (269MB), run=3711-3711msec 00:20:54.571 ----------------------------------------------------- 00:20:54.571 Suppressions used: 00:20:54.571 count bytes template 00:20:54.571 1 5 /usr/src/fio/parse.c 00:20:54.571 1 8 libtcmalloc_minimal.so 00:20:54.571 1 904 libcrypto.so 00:20:54.571 ----------------------------------------------------- 00:20:54.571 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:54.571 19:05:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:54.571 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:54.571 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:54.571 fio-3.35 00:20:54.571 Starting 2 threads 00:21:33.340 00:21:33.340 first_half: (groupid=0, jobs=1): err= 0: pid=77412: Tue Nov 26 19:05:58 2024 00:21:33.340 read: IOPS=2055, BW=8220KiB/s (8418kB/s)(256MiB/31856msec) 00:21:33.340 slat (nsec): min=4735, max=57631, avg=8852.56, stdev=3248.68 00:21:33.340 clat (usec): min=708, max=401940, avg=52069.94, stdev=33038.13 00:21:33.340 lat (usec): min=713, max=401953, avg=52078.79, stdev=33038.71 00:21:33.340 clat percentiles (msec): 00:21:33.340 | 1.00th=[ 11], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:21:33.340 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 46], 00:21:33.340 | 70.00th=[ 50], 80.00th=[ 54], 90.00th=[ 65], 95.00th=[ 107], 00:21:33.340 | 99.00th=[ 215], 99.50th=[ 232], 99.90th=[ 279], 99.95th=[ 305], 00:21:33.340 | 99.99th=[ 384] 00:21:33.340 write: IOPS=2059, BW=8239KiB/s (8437kB/s)(256MiB/31816msec); 0 zone resets 00:21:33.340 slat (usec): min=5, max=681, avg= 9.75, stdev= 6.59 00:21:33.340 clat (usec): min=460, max=84682, avg=10163.65, stdev=11389.26 00:21:33.340 lat (usec): min=469, max=84693, avg=10173.40, stdev=11389.61 00:21:33.340 clat percentiles (usec): 00:21:33.340 | 1.00th=[ 1237], 5.00th=[ 1565], 10.00th=[ 1860], 20.00th=[ 3228], 00:21:33.340 | 30.00th=[ 4490], 40.00th=[ 5669], 50.00th=[ 6718], 60.00th=[ 7767], 00:21:33.340 | 70.00th=[ 9372], 80.00th=[14091], 90.00th=[20579], 95.00th=[35914], 00:21:33.340 | 99.00th=[57934], 99.50th=[63177], 99.90th=[74974], 99.95th=[78119], 00:21:33.340 | 99.99th=[84411] 00:21:33.340 bw ( KiB/s): min= 1507, max=40032, per=100.00%, avg=21734.00, stdev=11532.50, samples=24 00:21:33.340 iops : min= 376, max=10008, avg=5433.42, stdev=2883.19, samples=24 00:21:33.340 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.10% 00:21:33.340 lat (msec) : 2=5.72%, 4=6.76%, 10=23.72%, 20=9.99%, 50=39.13% 00:21:33.340 lat (msec) : 100=11.85%, 250=2.54%, 500=0.14% 00:21:33.340 cpu : usr=98.93%, sys=0.13%, ctx=74, majf=0, minf=5533 00:21:33.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:33.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.341 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.341 issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.341 second_half: (groupid=0, jobs=1): err= 0: pid=77413: Tue Nov 26 19:05:58 2024 00:21:33.341 read: IOPS=2070, BW=8281KiB/s (8479kB/s)(256MiB/31635msec) 00:21:33.341 slat (usec): min=4, max=120, avg= 9.07, stdev= 3.19 00:21:33.341 clat (msec): min=12, max=388, avg=53.08, stdev=31.97 00:21:33.341 lat (msec): min=12, max=388, avg=53.09, stdev=31.97 00:21:33.341 clat percentiles (msec): 00:21:33.341 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:21:33.341 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 44], 60.00th=[ 47], 00:21:33.341 | 70.00th=[ 50], 80.00th=[ 56], 90.00th=[ 68], 95.00th=[ 96], 00:21:33.341 | 99.00th=[ 213], 99.50th=[ 
249], 99.90th=[ 334], 99.95th=[ 384], 00:21:33.341 | 99.99th=[ 388] 00:21:33.341 write: IOPS=2081, BW=8326KiB/s (8526kB/s)(256MiB/31484msec); 0 zone resets 00:21:33.341 slat (usec): min=6, max=225, avg= 9.96, stdev= 6.01 00:21:33.341 clat (usec): min=480, max=107557, avg=8717.51, stdev=8111.08 00:21:33.341 lat (usec): min=495, max=107567, avg=8727.48, stdev=8111.68 00:21:33.341 clat percentiles (usec): 00:21:33.341 | 1.00th=[ 1369], 5.00th=[ 2376], 10.00th=[ 3130], 20.00th=[ 4047], 00:21:33.341 | 30.00th=[ 5211], 40.00th=[ 6194], 50.00th=[ 7046], 60.00th=[ 7767], 00:21:33.341 | 70.00th=[ 8848], 80.00th=[ 10552], 90.00th=[ 15795], 95.00th=[ 22676], 00:21:33.341 | 99.00th=[ 40109], 99.50th=[ 45351], 99.90th=[101188], 99.95th=[104334], 00:21:33.341 | 99.99th=[106431] 00:21:33.341 bw ( KiB/s): min= 16, max=44592, per=100.00%, avg=20975.04, stdev=14278.46, samples=25 00:21:33.341 iops : min= 4, max=11148, avg=5243.56, stdev=3569.74, samples=25 00:21:33.341 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.11% 00:21:33.341 lat (msec) : 2=1.41%, 4=8.15%, 10=29.34%, 20=8.22%, 50=38.02% 00:21:33.341 lat (msec) : 100=12.29%, 250=2.17%, 500=0.24% 00:21:33.341 cpu : usr=98.95%, sys=0.14%, ctx=49, majf=0, minf=5580 00:21:33.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:33.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.341 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.341 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.341 00:21:33.341 Run status group 0 (all jobs): 00:21:33.341 READ: bw=16.1MiB/s (16.8MB/s), 8220KiB/s-8281KiB/s (8418kB/s-8479kB/s), io=512MiB (536MB), run=31635-31856msec 00:21:33.341 WRITE: bw=16.1MiB/s (16.9MB/s), 8239KiB/s-8326KiB/s (8437kB/s-8526kB/s), io=512MiB (537MB), run=31484-31816msec 00:21:33.341 ----------------------------------------------------- 00:21:33.341 Suppressions used: 00:21:33.341 count bytes template 00:21:33.341 2 10 /usr/src/fio/parse.c 00:21:33.341 3 288 /usr/src/fio/iolog.c 00:21:33.341 1 8 libtcmalloc_minimal.so 00:21:33.341 1 904 libcrypto.so 00:21:33.341 ----------------------------------------------------- 00:21:33.341 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:33.341 19:06:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:33.341 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:33.341 fio-3.35 00:21:33.341 Starting 1 thread 00:21:51.420 00:21:51.420 test: (groupid=0, jobs=1): err= 0: pid=77799: Tue Nov 26 19:06:20 2024 00:21:51.420 read: IOPS=5784, BW=22.6MiB/s (23.7MB/s)(255MiB/11271msec) 00:21:51.420 slat (nsec): min=4624, max=68304, avg=7343.37, stdev=2318.94 00:21:51.420 clat (usec): min=753, max=43384, avg=22114.72, stdev=2928.97 00:21:51.420 lat (usec): min=758, max=43389, avg=22122.07, stdev=2928.97 00:21:51.420 clat percentiles (usec): 00:21:51.420 | 1.00th=[19530], 5.00th=[19792], 10.00th=[19792], 20.00th=[20055], 00:21:51.420 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21103], 60.00th=[21627], 00:21:51.420 | 70.00th=[22414], 80.00th=[23200], 90.00th=[25822], 95.00th=[28443], 00:21:51.420 | 99.00th=[33817], 99.50th=[34341], 99.90th=[40109], 99.95th=[42206], 00:21:51.420 | 99.99th=[42730] 00:21:51.420 write: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(256MiB/5851msec); 0 zone resets 00:21:51.420 slat (usec): min=6, max=195, avg= 9.73, stdev= 4.74 00:21:51.420 clat (usec): min=639, max=70637, avg=11364.32, stdev=14348.05 00:21:51.420 lat (usec): min=647, max=70646, avg=11374.05, stdev=14348.01 00:21:51.420 clat percentiles (usec): 00:21:51.420 | 1.00th=[ 1012], 5.00th=[ 1205], 10.00th=[ 1336], 20.00th=[ 1549], 00:21:51.420 | 30.00th=[ 1778], 40.00th=[ 2409], 50.00th=[ 7373], 60.00th=[ 8586], 00:21:51.420 | 70.00th=[10028], 80.00th=[11600], 90.00th=[40109], 95.00th=[45876], 00:21:51.420 | 99.00th=[50594], 99.50th=[52691], 99.90th=[57410], 99.95th=[59507], 00:21:51.420 | 99.99th=[66847] 00:21:51.420 bw ( KiB/s): min=25024, max=64888, per=97.52%, avg=43690.67, stdev=10749.16, samples=12 00:21:51.420 iops : min= 6256, max=16222, avg=10922.67, stdev=2687.29, samples=12 00:21:51.420 lat (usec) : 750=0.01%, 1000=0.43% 00:21:51.420 lat (msec) : 2=17.75%, 4=2.68%, 10=14.31%, 20=13.32%, 50=50.88% 00:21:51.420 lat (msec) : 100=0.62% 00:21:51.420 cpu : usr=98.84%, sys=0.23%, ctx=23, majf=0, minf=5565 00:21:51.420 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:51.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.420 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:51.420 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.420 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:51.420 00:21:51.420 Run status group 0 (all jobs): 00:21:51.420 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=255MiB (267MB), run=11271-11271msec 00:21:51.420 WRITE: bw=43.8MiB/s (45.9MB/s), 43.8MiB/s-43.8MiB/s (45.9MB/s-45.9MB/s), io=256MiB (268MB), run=5851-5851msec 00:21:51.420 ----------------------------------------------------- 00:21:51.420 Suppressions used: 00:21:51.420 count bytes template 00:21:51.420 1 5 /usr/src/fio/parse.c 00:21:51.420 2 192 /usr/src/fio/iolog.c 00:21:51.420 1 8 libtcmalloc_minimal.so 00:21:51.420 1 904 libcrypto.so 00:21:51.420 ----------------------------------------------------- 00:21:51.420 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:21:51.420 Remove shared memory files 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58268 /dev/shm/spdk_tgt_trace.pid76017 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:21:51.420 00:21:51.420 real 1m18.791s 00:21:51.420 user 2m58.155s 00:21:51.420 sys 0m3.908s 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.420 19:06:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:51.420 ************************************ 00:21:51.420 END TEST ftl_fio_basic 00:21:51.420 ************************************ 00:21:51.420 19:06:21 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:51.420 19:06:21 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:51.420 19:06:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.420 19:06:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:51.420 ************************************ 00:21:51.420 START TEST ftl_bdevperf 00:21:51.420 ************************************ 00:21:51.420 19:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:51.420 * Looking for test storage... 
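Each of the three fio jobs above (randw-verify, randw-verify-j2, randw-verify-depth128) was launched through the same fio_bdev helper whose xtrace repeats before every run: it inspects the SPDK fio plugin for a linked ASan runtime and, if one is found, preloads it ahead of the plugin so fio can dlopen it under the sanitizer. A minimal standalone sketch of that step, reconstructed from the trace (paths as in this run; the fio_job parameter is illustrative):

    #!/usr/bin/env bash
    # Reconstruction of the sanitizer-preload step traced at
    # autotest_common.sh@1341-1356 above; not the helper itself.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    fio_job=$1   # e.g. randw-verify.fio (illustrative parameter)
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # Find the sanitizer runtime the plugin links against, if any;
        # in this log the result was /usr/lib64/libasan.so.8.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # Preload the sanitizer runtime (possibly empty) before the plugin.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_job"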
00:21:51.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:21:51.420 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:51.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.421 --rc genhtml_branch_coverage=1 00:21:51.421 --rc genhtml_function_coverage=1 00:21:51.421 --rc genhtml_legend=1 00:21:51.421 --rc geninfo_all_blocks=1 00:21:51.421 --rc geninfo_unexecuted_blocks=1 00:21:51.421 00:21:51.421 ' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:51.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.421 --rc genhtml_branch_coverage=1 00:21:51.421 
--rc genhtml_function_coverage=1 00:21:51.421 --rc genhtml_legend=1 00:21:51.421 --rc geninfo_all_blocks=1 00:21:51.421 --rc geninfo_unexecuted_blocks=1 00:21:51.421 00:21:51.421 ' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:51.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.421 --rc genhtml_branch_coverage=1 00:21:51.421 --rc genhtml_function_coverage=1 00:21:51.421 --rc genhtml_legend=1 00:21:51.421 --rc geninfo_all_blocks=1 00:21:51.421 --rc geninfo_unexecuted_blocks=1 00:21:51.421 00:21:51.421 ' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:51.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.421 --rc genhtml_branch_coverage=1 00:21:51.421 --rc genhtml_function_coverage=1 00:21:51.421 --rc genhtml_legend=1 00:21:51.421 --rc geninfo_all_blocks=1 00:21:51.421 --rc geninfo_unexecuted_blocks=1 00:21:51.421 00:21:51.421 ' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78071 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78071 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78071 ']' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.421 19:06:22 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:51.421 [2024-11-26 19:06:22.283727] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
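bdevperf.sh just registered killprocess as its SIGINT/SIGTERM/EXIT trap for pid 78071; the same helper tore down pid 77094 earlier in this log. A minimal sketch of it, reconstructed from that earlier xtrace (autotest_common.sh@954-978) — the sudo special case was not exercised here and is only stubbed:

    # Reconstruction of killprocess as traced for pid 77094; the real
    # helper's sudo branch is elided because this log never took it.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        # kill -0 probes for existence without delivering a signal.
        if kill -0 "$pid" 2>/dev/null; then
            if [[ $(uname) == Linux ]]; then
                local process_name
                process_name=$(ps --no-headers -o comm= "$pid")
                # In this log the pid resolved to reactor_0, not sudo.
                [[ $process_name == sudo ]] && return 1
            fi
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" 2>/dev/null
        fi
    }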
00:21:51.421 [2024-11-26 19:06:22.283937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78071 ] 00:21:51.421 [2024-11-26 19:06:22.475305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.421 [2024-11-26 19:06:22.601926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.355 19:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.355 19:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:21:52.355 19:06:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:52.355 19:06:23 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:21:52.355 19:06:23 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:52.355 19:06:23 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:21:52.355 19:06:23 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:21:52.355 19:06:23 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:52.613 19:06:23 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:52.613 19:06:23 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:21:52.613 19:06:23 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:52.613 19:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:52.613 19:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:52.613 19:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:52.613 19:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:52.613 19:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:52.871 19:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:52.871 { 00:21:52.871 "name": "nvme0n1", 00:21:52.871 "aliases": [ 00:21:52.871 "83e74d61-c292-484b-9b55-acd661ad2fde" 00:21:52.871 ], 00:21:52.871 "product_name": "NVMe disk", 00:21:52.871 "block_size": 4096, 00:21:52.871 "num_blocks": 1310720, 00:21:52.871 "uuid": "83e74d61-c292-484b-9b55-acd661ad2fde", 00:21:52.871 "numa_id": -1, 00:21:52.871 "assigned_rate_limits": { 00:21:52.871 "rw_ios_per_sec": 0, 00:21:52.871 "rw_mbytes_per_sec": 0, 00:21:52.871 "r_mbytes_per_sec": 0, 00:21:52.871 "w_mbytes_per_sec": 0 00:21:52.871 }, 00:21:52.871 "claimed": true, 00:21:52.871 "claim_type": "read_many_write_one", 00:21:52.871 "zoned": false, 00:21:52.871 "supported_io_types": { 00:21:52.871 "read": true, 00:21:52.871 "write": true, 00:21:52.871 "unmap": true, 00:21:52.871 "flush": true, 00:21:52.871 "reset": true, 00:21:52.871 "nvme_admin": true, 00:21:52.871 "nvme_io": true, 00:21:52.871 "nvme_io_md": false, 00:21:52.871 "write_zeroes": true, 00:21:52.871 "zcopy": false, 00:21:52.871 "get_zone_info": false, 00:21:52.871 "zone_management": false, 00:21:52.871 "zone_append": false, 00:21:52.871 "compare": true, 00:21:52.871 "compare_and_write": false, 00:21:52.871 "abort": true, 00:21:52.871 "seek_hole": false, 00:21:52.871 "seek_data": false, 00:21:52.871 "copy": true, 00:21:52.871 "nvme_iov_md": false 00:21:52.871 }, 00:21:52.871 "driver_specific": { 00:21:52.871 
"nvme": [ 00:21:52.871 { 00:21:52.871 "pci_address": "0000:00:11.0", 00:21:52.871 "trid": { 00:21:52.871 "trtype": "PCIe", 00:21:52.871 "traddr": "0000:00:11.0" 00:21:52.871 }, 00:21:52.871 "ctrlr_data": { 00:21:52.871 "cntlid": 0, 00:21:52.871 "vendor_id": "0x1b36", 00:21:52.871 "model_number": "QEMU NVMe Ctrl", 00:21:52.871 "serial_number": "12341", 00:21:52.871 "firmware_revision": "8.0.0", 00:21:52.871 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:52.871 "oacs": { 00:21:52.871 "security": 0, 00:21:52.871 "format": 1, 00:21:52.871 "firmware": 0, 00:21:52.871 "ns_manage": 1 00:21:52.872 }, 00:21:52.872 "multi_ctrlr": false, 00:21:52.872 "ana_reporting": false 00:21:52.872 }, 00:21:52.872 "vs": { 00:21:52.872 "nvme_version": "1.4" 00:21:52.872 }, 00:21:52.872 "ns_data": { 00:21:52.872 "id": 1, 00:21:52.872 "can_share": false 00:21:52.872 } 00:21:52.872 } 00:21:52.872 ], 00:21:52.872 "mp_policy": "active_passive" 00:21:52.872 } 00:21:52.872 } 00:21:52.872 ]' 00:21:52.872 19:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:52.872 19:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:52.872 19:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:52.872 19:06:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:52.872 19:06:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:52.872 19:06:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:21:52.872 19:06:24 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:21:52.872 19:06:24 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:52.872 19:06:24 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:21:52.872 19:06:24 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:52.872 19:06:24 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:53.437 19:06:24 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=6eb0d872-d7df-40fa-9abe-ec79085f7c28 00:21:53.437 19:06:24 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:21:53.437 19:06:24 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6eb0d872-d7df-40fa-9abe-ec79085f7c28 00:21:53.695 19:06:24 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:53.955 19:06:24 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f4468de6-e46e-4788-9e96-1318eda70703 00:21:53.955 19:06:24 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f4468de6-e46e-4788-9e96-1318eda70703 00:21:54.214 19:06:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=317746b8-83a9-46ca-aacb-c61760727cef 00:21:54.214 19:06:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 317746b8-83a9-46ca-aacb-c61760727cef 00:21:54.214 19:06:25 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:21:54.214 19:06:25 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:54.214 19:06:25 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=317746b8-83a9-46ca-aacb-c61760727cef 00:21:54.214 19:06:25 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:21:54.214 19:06:25 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 317746b8-83a9-46ca-aacb-c61760727cef 00:21:54.214 19:06:25 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=317746b8-83a9-46ca-aacb-c61760727cef 00:21:54.214 19:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:54.214 19:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:54.214 19:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:54.214 19:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 317746b8-83a9-46ca-aacb-c61760727cef 00:21:54.473 19:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:54.473 { 00:21:54.473 "name": "317746b8-83a9-46ca-aacb-c61760727cef", 00:21:54.473 "aliases": [ 00:21:54.474 "lvs/nvme0n1p0" 00:21:54.474 ], 00:21:54.474 "product_name": "Logical Volume", 00:21:54.474 "block_size": 4096, 00:21:54.474 "num_blocks": 26476544, 00:21:54.474 "uuid": "317746b8-83a9-46ca-aacb-c61760727cef", 00:21:54.474 "assigned_rate_limits": { 00:21:54.474 "rw_ios_per_sec": 0, 00:21:54.474 "rw_mbytes_per_sec": 0, 00:21:54.474 "r_mbytes_per_sec": 0, 00:21:54.474 "w_mbytes_per_sec": 0 00:21:54.474 }, 00:21:54.474 "claimed": false, 00:21:54.474 "zoned": false, 00:21:54.474 "supported_io_types": { 00:21:54.474 "read": true, 00:21:54.474 "write": true, 00:21:54.474 "unmap": true, 00:21:54.474 "flush": false, 00:21:54.474 "reset": true, 00:21:54.474 "nvme_admin": false, 00:21:54.474 "nvme_io": false, 00:21:54.474 "nvme_io_md": false, 00:21:54.474 "write_zeroes": true, 00:21:54.474 "zcopy": false, 00:21:54.474 "get_zone_info": false, 00:21:54.474 "zone_management": false, 00:21:54.474 "zone_append": false, 00:21:54.474 "compare": false, 00:21:54.474 "compare_and_write": false, 00:21:54.474 "abort": false, 00:21:54.474 "seek_hole": true, 00:21:54.474 "seek_data": true, 00:21:54.474 "copy": false, 00:21:54.474 "nvme_iov_md": false 00:21:54.474 }, 00:21:54.474 "driver_specific": { 00:21:54.474 "lvol": { 00:21:54.474 "lvol_store_uuid": "f4468de6-e46e-4788-9e96-1318eda70703", 00:21:54.474 "base_bdev": "nvme0n1", 00:21:54.474 "thin_provision": true, 00:21:54.474 "num_allocated_clusters": 0, 00:21:54.474 "snapshot": false, 00:21:54.474 "clone": false, 00:21:54.474 "esnap_clone": false 00:21:54.474 } 00:21:54.474 } 00:21:54.474 } 00:21:54.474 ]' 00:21:54.474 19:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:54.474 19:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:54.474 19:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:54.735 19:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:54.735 19:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:54.735 19:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:54.735 19:06:25 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:21:54.735 19:06:25 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:21:54.735 19:06:25 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:54.993 19:06:26 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:54.993 19:06:26 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:54.993 19:06:26 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 317746b8-83a9-46ca-aacb-c61760727cef 00:21:54.993 19:06:26 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=317746b8-83a9-46ca-aacb-c61760727cef 00:21:54.993 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:54.993 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:54.993 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:54.993 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 317746b8-83a9-46ca-aacb-c61760727cef 00:21:55.251 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:55.251 { 00:21:55.251 "name": "317746b8-83a9-46ca-aacb-c61760727cef", 00:21:55.251 "aliases": [ 00:21:55.251 "lvs/nvme0n1p0" 00:21:55.251 ], 00:21:55.251 "product_name": "Logical Volume", 00:21:55.251 "block_size": 4096, 00:21:55.251 "num_blocks": 26476544, 00:21:55.251 "uuid": "317746b8-83a9-46ca-aacb-c61760727cef", 00:21:55.251 "assigned_rate_limits": { 00:21:55.251 "rw_ios_per_sec": 0, 00:21:55.251 "rw_mbytes_per_sec": 0, 00:21:55.251 "r_mbytes_per_sec": 0, 00:21:55.251 "w_mbytes_per_sec": 0 00:21:55.251 }, 00:21:55.251 "claimed": false, 00:21:55.251 "zoned": false, 00:21:55.251 "supported_io_types": { 00:21:55.251 "read": true, 00:21:55.251 "write": true, 00:21:55.251 "unmap": true, 00:21:55.251 "flush": false, 00:21:55.251 "reset": true, 00:21:55.251 "nvme_admin": false, 00:21:55.251 "nvme_io": false, 00:21:55.251 "nvme_io_md": false, 00:21:55.251 "write_zeroes": true, 00:21:55.251 "zcopy": false, 00:21:55.251 "get_zone_info": false, 00:21:55.251 "zone_management": false, 00:21:55.251 "zone_append": false, 00:21:55.251 "compare": false, 00:21:55.251 "compare_and_write": false, 00:21:55.251 "abort": false, 00:21:55.251 "seek_hole": true, 00:21:55.251 "seek_data": true, 00:21:55.251 "copy": false, 00:21:55.251 "nvme_iov_md": false 00:21:55.252 }, 00:21:55.252 "driver_specific": { 00:21:55.252 "lvol": { 00:21:55.252 "lvol_store_uuid": "f4468de6-e46e-4788-9e96-1318eda70703", 00:21:55.252 "base_bdev": "nvme0n1", 00:21:55.252 "thin_provision": true, 00:21:55.252 "num_allocated_clusters": 0, 00:21:55.252 "snapshot": false, 00:21:55.252 "clone": false, 00:21:55.252 "esnap_clone": false 00:21:55.252 } 00:21:55.252 } 00:21:55.252 } 00:21:55.252 ]' 00:21:55.252 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:55.252 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:55.252 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:55.252 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:55.252 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:55.252 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:55.252 19:06:26 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:21:55.252 19:06:26 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:55.817 19:06:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:21:55.817 19:06:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 317746b8-83a9-46ca-aacb-c61760727cef 00:21:55.817 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=317746b8-83a9-46ca-aacb-c61760727cef 00:21:55.817 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:55.817 19:06:26 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:21:55.817 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:55.817 19:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 317746b8-83a9-46ca-aacb-c61760727cef 00:21:56.075 19:06:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:56.075 { 00:21:56.075 "name": "317746b8-83a9-46ca-aacb-c61760727cef", 00:21:56.075 "aliases": [ 00:21:56.075 "lvs/nvme0n1p0" 00:21:56.075 ], 00:21:56.075 "product_name": "Logical Volume", 00:21:56.075 "block_size": 4096, 00:21:56.075 "num_blocks": 26476544, 00:21:56.075 "uuid": "317746b8-83a9-46ca-aacb-c61760727cef", 00:21:56.075 "assigned_rate_limits": { 00:21:56.075 "rw_ios_per_sec": 0, 00:21:56.075 "rw_mbytes_per_sec": 0, 00:21:56.075 "r_mbytes_per_sec": 0, 00:21:56.075 "w_mbytes_per_sec": 0 00:21:56.075 }, 00:21:56.075 "claimed": false, 00:21:56.075 "zoned": false, 00:21:56.075 "supported_io_types": { 00:21:56.075 "read": true, 00:21:56.075 "write": true, 00:21:56.075 "unmap": true, 00:21:56.075 "flush": false, 00:21:56.075 "reset": true, 00:21:56.075 "nvme_admin": false, 00:21:56.075 "nvme_io": false, 00:21:56.075 "nvme_io_md": false, 00:21:56.075 "write_zeroes": true, 00:21:56.075 "zcopy": false, 00:21:56.075 "get_zone_info": false, 00:21:56.075 "zone_management": false, 00:21:56.075 "zone_append": false, 00:21:56.075 "compare": false, 00:21:56.075 "compare_and_write": false, 00:21:56.075 "abort": false, 00:21:56.075 "seek_hole": true, 00:21:56.075 "seek_data": true, 00:21:56.075 "copy": false, 00:21:56.075 "nvme_iov_md": false 00:21:56.075 }, 00:21:56.075 "driver_specific": { 00:21:56.075 "lvol": { 00:21:56.075 "lvol_store_uuid": "f4468de6-e46e-4788-9e96-1318eda70703", 00:21:56.075 "base_bdev": "nvme0n1", 00:21:56.075 "thin_provision": true, 00:21:56.075 "num_allocated_clusters": 0, 00:21:56.075 "snapshot": false, 00:21:56.075 "clone": false, 00:21:56.075 "esnap_clone": false 00:21:56.075 } 00:21:56.075 } 00:21:56.075 } 00:21:56.075 ]' 00:21:56.075 19:06:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:56.075 19:06:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:56.075 19:06:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:56.075 19:06:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:56.075 19:06:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:56.075 19:06:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:56.075 19:06:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:21:56.075 19:06:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 317746b8-83a9-46ca-aacb-c61760727cef -c nvc0n1p0 --l2p_dram_limit 20 00:21:56.335 [2024-11-26 19:06:27.532509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.335 [2024-11-26 19:06:27.532589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:56.335 [2024-11-26 19:06:27.532612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:56.335 [2024-11-26 19:06:27.532627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.335 [2024-11-26 19:06:27.532710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.335 [2024-11-26 19:06:27.532733] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:56.335 [2024-11-26 19:06:27.532746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:56.335 [2024-11-26 19:06:27.532760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.335 [2024-11-26 19:06:27.532788] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:56.335 [2024-11-26 19:06:27.533845] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:56.335 [2024-11-26 19:06:27.533887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.335 [2024-11-26 19:06:27.533907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:56.335 [2024-11-26 19:06:27.533929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:21:56.335 [2024-11-26 19:06:27.533952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.335 [2024-11-26 19:06:27.534189] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b600cf61-9a0e-484b-a7a9-240c80794600 00:21:56.335 [2024-11-26 19:06:27.535291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.335 [2024-11-26 19:06:27.535334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:56.335 [2024-11-26 19:06:27.535358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:56.335 [2024-11-26 19:06:27.535370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.335 [2024-11-26 19:06:27.540310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.335 [2024-11-26 19:06:27.540364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:56.335 [2024-11-26 19:06:27.540385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.879 ms 00:21:56.335 [2024-11-26 19:06:27.540401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.335 [2024-11-26 19:06:27.540571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.335 [2024-11-26 19:06:27.540594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:56.335 [2024-11-26 19:06:27.540615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:21:56.335 [2024-11-26 19:06:27.540634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.335 [2024-11-26 19:06:27.540708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.335 [2024-11-26 19:06:27.540737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:56.335 [2024-11-26 19:06:27.540754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:56.335 [2024-11-26 19:06:27.540766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.335 [2024-11-26 19:06:27.540805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:56.335 [2024-11-26 19:06:27.545477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.335 [2024-11-26 19:06:27.545525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:56.335 [2024-11-26 19:06:27.545542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.687 ms 00:21:56.335 [2024-11-26 19:06:27.545562] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.335 [2024-11-26 19:06:27.545606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.335 [2024-11-26 19:06:27.545625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:56.335 [2024-11-26 19:06:27.545638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:56.335 [2024-11-26 19:06:27.545652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.335 [2024-11-26 19:06:27.545718] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:56.335 [2024-11-26 19:06:27.545890] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:56.335 [2024-11-26 19:06:27.545921] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:56.335 [2024-11-26 19:06:27.545941] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:56.335 [2024-11-26 19:06:27.545957] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:56.335 [2024-11-26 19:06:27.545973] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:56.335 [2024-11-26 19:06:27.545986] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:56.335 [2024-11-26 19:06:27.545999] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:56.335 [2024-11-26 19:06:27.546010] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:56.335 [2024-11-26 19:06:27.546023] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:56.335 [2024-11-26 19:06:27.546050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.335 [2024-11-26 19:06:27.546063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:56.335 [2024-11-26 19:06:27.546075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:21:56.335 [2024-11-26 19:06:27.546088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.335 [2024-11-26 19:06:27.546198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.335 [2024-11-26 19:06:27.546223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:56.336 [2024-11-26 19:06:27.546237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:56.336 [2024-11-26 19:06:27.546252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.336 [2024-11-26 19:06:27.546357] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:56.336 [2024-11-26 19:06:27.546391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:56.336 [2024-11-26 19:06:27.546405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:56.336 [2024-11-26 19:06:27.546419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:56.336 [2024-11-26 19:06:27.546443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:56.336 
[2024-11-26 19:06:27.546468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:56.336 [2024-11-26 19:06:27.546478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:56.336 [2024-11-26 19:06:27.546501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:56.336 [2024-11-26 19:06:27.546527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:56.336 [2024-11-26 19:06:27.546538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:56.336 [2024-11-26 19:06:27.546560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:56.336 [2024-11-26 19:06:27.546571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:56.336 [2024-11-26 19:06:27.546586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:56.336 [2024-11-26 19:06:27.546610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:56.336 [2024-11-26 19:06:27.546621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:56.336 [2024-11-26 19:06:27.546646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.336 [2024-11-26 19:06:27.546671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:56.336 [2024-11-26 19:06:27.546684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.336 [2024-11-26 19:06:27.546706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:56.336 [2024-11-26 19:06:27.546716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.336 [2024-11-26 19:06:27.546739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:56.336 [2024-11-26 19:06:27.546752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.336 [2024-11-26 19:06:27.546777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:56.336 [2024-11-26 19:06:27.546788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:56.336 [2024-11-26 19:06:27.546811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:56.336 [2024-11-26 19:06:27.546823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:56.336 [2024-11-26 19:06:27.546833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:56.336 [2024-11-26 19:06:27.546845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:56.336 [2024-11-26 19:06:27.546856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:21:56.336 [2024-11-26 19:06:27.546868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:56.336 [2024-11-26 19:06:27.546891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:56.336 [2024-11-26 19:06:27.546902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546914] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:56.336 [2024-11-26 19:06:27.546926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:56.336 [2024-11-26 19:06:27.546939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:56.336 [2024-11-26 19:06:27.546950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.336 [2024-11-26 19:06:27.546968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:56.336 [2024-11-26 19:06:27.546979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:56.336 [2024-11-26 19:06:27.546991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:56.336 [2024-11-26 19:06:27.547002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:56.336 [2024-11-26 19:06:27.547014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:56.336 [2024-11-26 19:06:27.547025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:56.336 [2024-11-26 19:06:27.547042] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:56.336 [2024-11-26 19:06:27.547058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:56.336 [2024-11-26 19:06:27.547074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:56.336 [2024-11-26 19:06:27.547086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:56.336 [2024-11-26 19:06:27.547100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:56.336 [2024-11-26 19:06:27.547111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:56.336 [2024-11-26 19:06:27.547125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:56.336 [2024-11-26 19:06:27.547145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:56.336 [2024-11-26 19:06:27.547159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:56.336 [2024-11-26 19:06:27.547193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:56.336 [2024-11-26 19:06:27.547213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:56.336 [2024-11-26 19:06:27.547225] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:56.336 [2024-11-26 19:06:27.547239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:56.336 [2024-11-26 19:06:27.547250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:56.336 [2024-11-26 19:06:27.547263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:56.336 [2024-11-26 19:06:27.547275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:56.336 [2024-11-26 19:06:27.547289] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:56.336 [2024-11-26 19:06:27.547302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:56.336 [2024-11-26 19:06:27.547323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:56.336 [2024-11-26 19:06:27.547335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:56.336 [2024-11-26 19:06:27.547348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:56.336 [2024-11-26 19:06:27.547360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:56.336 [2024-11-26 19:06:27.547375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.336 [2024-11-26 19:06:27.547387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:56.336 [2024-11-26 19:06:27.547401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:21:56.336 [2024-11-26 19:06:27.547412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.336 [2024-11-26 19:06:27.547463] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
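The startup trace above prints the layout twice: per-region in MiB (the dump_region lines) and as raw superblock v5 records, where blk_offs and blk_sz are block counts in hex. The two views line up if the device uses 4 KiB blocks, which the numbers imply; for instance, the type:0x3 record maps back onto Region band_md:

    # Cross-check (sketch, assuming 4 KiB blocks): record type:0x3 blk_offs:0x5020 blk_sz:0x80
    # against "Region band_md: offset 80.12 MiB, blocks 0.50 MiB" from the dump above.
    echo "offset: $(( 0x5020 * 4096 / 1024 )) KiB, size: $(( 0x80 * 4096 / 1024 )) KiB"
    # -> offset: 82048 KiB (= 80.12 MiB), size: 512 KiB (= 0.50 MiB)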
00:21:56.336 [2024-11-26 19:06:27.547480] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:58.313 [2024-11-26 19:06:29.502973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.313 [2024-11-26 19:06:29.503076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:58.313 [2024-11-26 19:06:29.503103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1955.510 ms 00:21:58.313 [2024-11-26 19:06:29.503117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.603 [2024-11-26 19:06:29.537221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.603 [2024-11-26 19:06:29.537296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.603 [2024-11-26 19:06:29.537321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.785 ms 00:21:58.603 [2024-11-26 19:06:29.537335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.603 [2024-11-26 19:06:29.537528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.603 [2024-11-26 19:06:29.537549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:58.603 [2024-11-26 19:06:29.537568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:58.603 [2024-11-26 19:06:29.537581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.603 [2024-11-26 19:06:29.600925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.603 [2024-11-26 19:06:29.601016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:58.604 [2024-11-26 19:06:29.601045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.246 ms 00:21:58.604 [2024-11-26 19:06:29.601060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.604 [2024-11-26 19:06:29.601144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.604 [2024-11-26 19:06:29.601163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.604 [2024-11-26 19:06:29.601200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:58.604 [2024-11-26 19:06:29.601219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.604 [2024-11-26 19:06:29.601680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.604 [2024-11-26 19:06:29.601715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.604 [2024-11-26 19:06:29.601737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:21:58.604 [2024-11-26 19:06:29.601751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.604 [2024-11-26 19:06:29.601931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.604 [2024-11-26 19:06:29.601963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.604 [2024-11-26 19:06:29.601985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:21:58.604 [2024-11-26 19:06:29.601999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.604 [2024-11-26 19:06:29.622278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.604 [2024-11-26 19:06:29.622360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.604 [2024-11-26 
19:06:29.622388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.240 ms 00:21:58.604 [2024-11-26 19:06:29.622428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.604 [2024-11-26 19:06:29.638981] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:58.604 [2024-11-26 19:06:29.645039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.604 [2024-11-26 19:06:29.645136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:58.604 [2024-11-26 19:06:29.645162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.447 ms 00:21:58.604 [2024-11-26 19:06:29.645203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.604 [2024-11-26 19:06:29.710882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.604 [2024-11-26 19:06:29.710991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:58.604 [2024-11-26 19:06:29.711017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.605 ms 00:21:58.604 [2024-11-26 19:06:29.711035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.604 [2024-11-26 19:06:29.711384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.604 [2024-11-26 19:06:29.711434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:58.604 [2024-11-26 19:06:29.711463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:21:58.604 [2024-11-26 19:06:29.711518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.604 [2024-11-26 19:06:29.756222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.604 [2024-11-26 19:06:29.756336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:58.604 [2024-11-26 19:06:29.756362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.554 ms 00:21:58.604 [2024-11-26 19:06:29.756380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.604 [2024-11-26 19:06:29.799937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.604 [2024-11-26 19:06:29.800093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:58.604 [2024-11-26 19:06:29.800139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.452 ms 00:21:58.604 [2024-11-26 19:06:29.800194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.604 [2024-11-26 19:06:29.801543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.604 [2024-11-26 19:06:29.801623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:58.604 [2024-11-26 19:06:29.801657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:21:58.604 [2024-11-26 19:06:29.801686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.860 [2024-11-26 19:06:29.919214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.860 [2024-11-26 19:06:29.919327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:58.860 [2024-11-26 19:06:29.919353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 117.355 ms 00:21:58.860 [2024-11-26 19:06:29.919371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.860 [2024-11-26 
19:06:29.961960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.860 [2024-11-26 19:06:29.962089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:58.861 [2024-11-26 19:06:29.962121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.303 ms 00:21:58.861 [2024-11-26 19:06:29.962139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.861 [2024-11-26 19:06:30.003396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.861 [2024-11-26 19:06:30.003563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:58.861 [2024-11-26 19:06:30.003596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.124 ms 00:21:58.861 [2024-11-26 19:06:30.003613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.861 [2024-11-26 19:06:30.054744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.861 [2024-11-26 19:06:30.054869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:58.861 [2024-11-26 19:06:30.054896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.025 ms 00:21:58.861 [2024-11-26 19:06:30.054916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.861 [2024-11-26 19:06:30.055051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.861 [2024-11-26 19:06:30.055084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:58.861 [2024-11-26 19:06:30.055101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:58.861 [2024-11-26 19:06:30.055117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.861 [2024-11-26 19:06:30.055340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.861 [2024-11-26 19:06:30.055383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:58.861 [2024-11-26 19:06:30.055401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:58.861 [2024-11-26 19:06:30.055417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.861 [2024-11-26 19:06:30.056841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2523.644 ms, result 0 00:21:58.861 { 00:21:58.861 "name": "ftl0", 00:21:58.861 "uuid": "b600cf61-9a0e-484b-a7a9-240c80794600" 00:21:58.861 } 00:21:59.118 19:06:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:59.118 19:06:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:21:59.118 19:06:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:21:59.376 19:06:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:59.376 [2024-11-26 19:06:30.521035] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:59.376 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:59.376 Zero copy mechanism will not be used. 00:21:59.376 Running I/O for 4 seconds... 
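Here bdevperf.py drives the already-running bdevperf server over RPC: queue depth 1, 69632-byte I/Os (68 KiB, which is why the 65536-byte zero-copy threshold warning fires), random writes for 4 seconds. A rough one-shot equivalent, assuming the ftl0 setup above had been saved to a JSON config, might look like this:

    # Sketch only: ftl.json is a placeholder config that recreates ftl0;
    # -q depth, -o io size, -w workload and -t seconds mirror the perform_tests arguments.
    ./build/examples/bdevperf --json ./ftl.json -q 1 -o 69632 -w randwrite -t 4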
00:22:01.682 2176.00 IOPS, 144.50 MiB/s [2024-11-26T19:06:33.831Z] 2224.00 IOPS, 147.69 MiB/s [2024-11-26T19:06:34.765Z] 2208.67 IOPS, 146.67 MiB/s [2024-11-26T19:06:34.765Z] 2177.00 IOPS, 144.57 MiB/s 00:22:03.550 Latency(us) 00:22:03.550 [2024-11-26T19:06:34.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.550 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:22:03.550 ftl0 : 4.00 2176.09 144.51 0.00 0.00 481.55 222.49 2427.81 00:22:03.550 [2024-11-26T19:06:34.765Z] =================================================================================================================== 00:22:03.550 [2024-11-26T19:06:34.765Z] Total : 2176.09 144.51 0.00 0.00 481.55 222.49 2427.81 00:22:03.550 [2024-11-26 19:06:34.533277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:03.550 { 00:22:03.550 "results": [ 00:22:03.550 { 00:22:03.550 "job": "ftl0", 00:22:03.550 "core_mask": "0x1", 00:22:03.550 "workload": "randwrite", 00:22:03.550 "status": "finished", 00:22:03.550 "queue_depth": 1, 00:22:03.550 "io_size": 69632, 00:22:03.550 "runtime": 4.002141, 00:22:03.550 "iops": 2176.0852503697397, 00:22:03.550 "mibps": 144.50566115736552, 00:22:03.550 "io_failed": 0, 00:22:03.550 "io_timeout": 0, 00:22:03.550 "avg_latency_us": 481.5492570903663, 00:22:03.550 "min_latency_us": 222.48727272727274, 00:22:03.550 "max_latency_us": 2427.8109090909093 00:22:03.550 } 00:22:03.550 ], 00:22:03.550 "core_count": 1 00:22:03.550 } 00:22:03.550 19:06:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:22:03.550 [2024-11-26 19:06:34.692784] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:03.550 Running I/O for 4 seconds... 
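perform_tests returns the JSON block printed above, and the headline numbers are self-consistent (2176.09 IOPS x 69632 B ≈ 144.51 MiB/s). Saved to a file, the useful fields could be pulled out with jq, e.g.:

    # Extract job name, IOPS, MiB/s and mean latency from a saved result (sketch):
    jq -r '.results[] | [.job, .iops, .mibps, .avg_latency_us] | @tsv' result.json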
00:22:05.490 6976.00 IOPS, 27.25 MiB/s [2024-11-26T19:06:38.080Z] 6514.50 IOPS, 25.45 MiB/s [2024-11-26T19:06:39.015Z] 6460.67 IOPS, 25.24 MiB/s [2024-11-26T19:06:39.015Z] 6376.25 IOPS, 24.91 MiB/s 00:22:07.800 Latency(us) 00:22:07.800 [2024-11-26T19:06:39.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.800 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:22:07.800 ftl0 : 4.02 6370.48 24.88 0.00 0.00 20033.83 394.71 45756.04 00:22:07.800 [2024-11-26T19:06:39.015Z] =================================================================================================================== 00:22:07.800 [2024-11-26T19:06:39.015Z] Total : 6370.48 24.88 0.00 0.00 20033.83 0.00 45756.04 00:22:07.800 [2024-11-26 19:06:38.727668] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:07.800 { 00:22:07.800 "results": [ 00:22:07.800 { 00:22:07.800 "job": "ftl0", 00:22:07.800 "core_mask": "0x1", 00:22:07.800 "workload": "randwrite", 00:22:07.800 "status": "finished", 00:22:07.800 "queue_depth": 128, 00:22:07.800 "io_size": 4096, 00:22:07.800 "runtime": 4.023086, 00:22:07.800 "iops": 6370.482758757829, 00:22:07.800 "mibps": 24.88469827639777, 00:22:07.800 "io_failed": 0, 00:22:07.800 "io_timeout": 0, 00:22:07.800 "avg_latency_us": 20033.829572891507, 00:22:07.800 "min_latency_us": 394.70545454545453, 00:22:07.800 "max_latency_us": 45756.04363636364 00:22:07.800 } 00:22:07.800 ], 00:22:07.800 "core_count": 1 00:22:07.800 } 00:22:07.800 19:06:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:22:07.800 [2024-11-26 19:06:38.930220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:07.800 Running I/O for 4 seconds... 
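The same sanity check holds for the deep-queue pass above: at 4 KiB per I/O, IOPS and throughput must agree.

    # 6370.48 IOPS x 4096 B, converted to MiB/s:
    awk 'BEGIN { printf "%.2f MiB/s\n", 6370.48 * 4096 / 1048576 }'   # -> 24.88 MiB/s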
00:22:10.110 5509.00 IOPS, 21.52 MiB/s [2024-11-26T19:06:42.259Z] 5549.50 IOPS, 21.68 MiB/s [2024-11-26T19:06:43.194Z] 5520.67 IOPS, 21.57 MiB/s [2024-11-26T19:06:43.194Z] 5538.50 IOPS, 21.63 MiB/s 00:22:11.979 Latency(us) 00:22:11.979 [2024-11-26T19:06:43.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.979 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:11.979 Verification LBA range: start 0x0 length 0x1400000 00:22:11.979 ftl0 : 4.01 5550.49 21.68 0.00 0.00 22979.87 394.71 33125.47 00:22:11.979 [2024-11-26T19:06:43.194Z] =================================================================================================================== 00:22:11.979 [2024-11-26T19:06:43.194Z] Total : 5550.49 21.68 0.00 0.00 22979.87 0.00 33125.47 00:22:11.979 [2024-11-26 19:06:42.963916] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:11.979 { 00:22:11.979 "results": [ 00:22:11.979 { 00:22:11.979 "job": "ftl0", 00:22:11.979 "core_mask": "0x1", 00:22:11.979 "workload": "verify", 00:22:11.979 "status": "finished", 00:22:11.979 "verify_range": { 00:22:11.979 "start": 0, 00:22:11.979 "length": 20971520 00:22:11.979 }, 00:22:11.979 "queue_depth": 128, 00:22:11.979 "io_size": 4096, 00:22:11.979 "runtime": 4.014242, 00:22:11.979 "iops": 5550.487489294367, 00:22:11.979 "mibps": 21.68159175505612, 00:22:11.979 "io_failed": 0, 00:22:11.979 "io_timeout": 0, 00:22:11.979 "avg_latency_us": 22979.86709981191, 00:22:11.979 "min_latency_us": 394.70545454545453, 00:22:11.979 "max_latency_us": 33125.46909090909 00:22:11.979 } 00:22:11.979 ], 00:22:11.979 "core_count": 1 00:22:11.979 } 00:22:11.979 19:06:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:22:12.238 [2024-11-26 19:06:43.282087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.238 [2024-11-26 19:06:43.282192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:12.238 [2024-11-26 19:06:43.282216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:12.238 [2024-11-26 19:06:43.282231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.238 [2024-11-26 19:06:43.282267] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:12.238 [2024-11-26 19:06:43.285646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.238 [2024-11-26 19:06:43.285690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:12.238 [2024-11-26 19:06:43.285711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:22:12.238 [2024-11-26 19:06:43.285723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.238 [2024-11-26 19:06:43.287153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.238 [2024-11-26 19:06:43.287218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:12.238 [2024-11-26 19:06:43.287241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.386 ms 00:22:12.238 [2024-11-26 19:06:43.287258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.498 [2024-11-26 19:06:43.483774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.498 [2024-11-26 19:06:43.483872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:22:12.498 [2024-11-26 19:06:43.483903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 196.467 ms 00:22:12.498 [2024-11-26 19:06:43.483917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.498 [2024-11-26 19:06:43.490694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.498 [2024-11-26 19:06:43.490763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:12.498 [2024-11-26 19:06:43.490785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.705 ms 00:22:12.498 [2024-11-26 19:06:43.490801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.498 [2024-11-26 19:06:43.523819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.498 [2024-11-26 19:06:43.523927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:12.498 [2024-11-26 19:06:43.523953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.865 ms 00:22:12.498 [2024-11-26 19:06:43.523966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.498 [2024-11-26 19:06:43.543935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.498 [2024-11-26 19:06:43.544044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:12.498 [2024-11-26 19:06:43.544069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.859 ms 00:22:12.498 [2024-11-26 19:06:43.544083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.498 [2024-11-26 19:06:43.544382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.498 [2024-11-26 19:06:43.544422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:12.498 [2024-11-26 19:06:43.544458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:22:12.498 [2024-11-26 19:06:43.544481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.498 [2024-11-26 19:06:43.578350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.498 [2024-11-26 19:06:43.578444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:12.498 [2024-11-26 19:06:43.578468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.812 ms 00:22:12.498 [2024-11-26 19:06:43.578481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.498 [2024-11-26 19:06:43.613274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.498 [2024-11-26 19:06:43.613368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:12.498 [2024-11-26 19:06:43.613393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.666 ms 00:22:12.498 [2024-11-26 19:06:43.613406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.498 [2024-11-26 19:06:43.646619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.498 [2024-11-26 19:06:43.646710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:12.498 [2024-11-26 19:06:43.646735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.087 ms 00:22:12.498 [2024-11-26 19:06:43.646748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.498 [2024-11-26 19:06:43.679952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.498 [2024-11-26 19:06:43.680039] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:12.498 [2024-11-26 19:06:43.680069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.979 ms 00:22:12.498 [2024-11-26 19:06:43.680082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.498 [2024-11-26 19:06:43.680205] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:12.498 [2024-11-26 19:06:43.680234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:12.498 [2024-11-26 19:06:43.680697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:22:12.498 [2024-11-26 19:06:43.680721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.680749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.680774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.680796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.680809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.680823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.680843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.680874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.680899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.680928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.680952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.680977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.681995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682704] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:12.499 [2024-11-26 19:06:43.682783] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:12.499 [2024-11-26 19:06:43.682814] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b600cf61-9a0e-484b-a7a9-240c80794600 00:22:12.499 [2024-11-26 19:06:43.682846] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:12.499 [2024-11-26 19:06:43.682872] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:12.499 [2024-11-26 19:06:43.682894] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:12.499 [2024-11-26 19:06:43.682920] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:12.499 [2024-11-26 19:06:43.682940] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:12.499 [2024-11-26 19:06:43.682966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:12.499 [2024-11-26 19:06:43.682991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:12.499 [2024-11-26 19:06:43.683013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:12.499 [2024-11-26 19:06:43.683025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:12.499 [2024-11-26 19:06:43.683043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.499 [2024-11-26 19:06:43.683062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:12.499 [2024-11-26 19:06:43.683094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:22:12.499 [2024-11-26 19:06:43.683117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.499 [2024-11-26 19:06:43.700686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.500 [2024-11-26 19:06:43.700781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:12.500 [2024-11-26 19:06:43.700807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.369 ms 00:22:12.500 [2024-11-26 19:06:43.700820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.500 [2024-11-26 19:06:43.701414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.500 [2024-11-26 19:06:43.701471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:12.500 [2024-11-26 19:06:43.701495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:22:12.500 [2024-11-26 19:06:43.701508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.750646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.750736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:12.758 [2024-11-26 19:06:43.750763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.758 [2024-11-26 19:06:43.750777] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.750875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.750892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:12.758 [2024-11-26 19:06:43.750907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.758 [2024-11-26 19:06:43.750919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.751159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.751225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:12.758 [2024-11-26 19:06:43.751261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.758 [2024-11-26 19:06:43.751284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.751330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.751358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:12.758 [2024-11-26 19:06:43.751388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.758 [2024-11-26 19:06:43.751420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.861506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.861605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:12.758 [2024-11-26 19:06:43.861633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.758 [2024-11-26 19:06:43.861649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.951978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.952067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:12.758 [2024-11-26 19:06:43.952093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.758 [2024-11-26 19:06:43.952106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.952290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.952312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:12.758 [2024-11-26 19:06:43.952328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.758 [2024-11-26 19:06:43.952340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.952411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.952430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:12.758 [2024-11-26 19:06:43.952446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.758 [2024-11-26 19:06:43.952457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.952591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.952626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:12.758 [2024-11-26 19:06:43.952648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:22:12.758 [2024-11-26 19:06:43.952659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.952720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.952739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:12.758 [2024-11-26 19:06:43.952754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.758 [2024-11-26 19:06:43.952766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.952815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.952834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:12.758 [2024-11-26 19:06:43.952849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.758 [2024-11-26 19:06:43.952873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.952935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.758 [2024-11-26 19:06:43.952952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:12.758 [2024-11-26 19:06:43.952967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.758 [2024-11-26 19:06:43.952978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.758 [2024-11-26 19:06:43.953136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 671.005 ms, result 0 00:22:12.758 true 00:22:13.017 19:06:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78071 00:22:13.017 19:06:43 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78071 ']' 00:22:13.017 19:06:43 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78071 00:22:13.017 19:06:43 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:22:13.017 19:06:43 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.017 19:06:43 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78071 00:22:13.017 killing process with pid 78071 00:22:13.017 Received shutdown signal, test time was about 4.000000 seconds 00:22:13.017 00:22:13.017 Latency(us) 00:22:13.017 [2024-11-26T19:06:44.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.017 [2024-11-26T19:06:44.232Z] =================================================================================================================== 00:22:13.017 [2024-11-26T19:06:44.232Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.017 19:06:44 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:13.017 19:06:44 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:13.017 19:06:44 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78071' 00:22:13.017 19:06:44 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78071 00:22:13.017 19:06:44 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78071 00:22:14.919 19:06:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:14.919 19:06:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:22:14.919 Remove shared memory files 00:22:14.919 19:06:46 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:14.919 19:06:46 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:22:14.919 19:06:46 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:22:14.919 19:06:46 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:22:14.919 19:06:46 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:14.919 19:06:46 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:22:14.919 00:22:14.919 real 0m24.109s 00:22:14.919 user 0m28.415s 00:22:14.919 sys 0m1.188s 00:22:14.919 19:06:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:14.919 19:06:46 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:14.919 ************************************ 00:22:14.919 END TEST ftl_bdevperf 00:22:14.919 ************************************ 00:22:14.919 19:06:46 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:14.919 19:06:46 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:14.919 19:06:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:14.919 19:06:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:14.919 ************************************ 00:22:14.919 START TEST ftl_trim 00:22:14.919 ************************************ 00:22:14.919 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:15.179 * Looking for test storage... 00:22:15.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:15.179 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:15.179 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:22:15.179 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:15.179 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:15.179 19:06:46 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:22:15.179 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:15.179 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:15.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.179 --rc genhtml_branch_coverage=1 00:22:15.179 --rc genhtml_function_coverage=1 00:22:15.179 --rc genhtml_legend=1 00:22:15.179 --rc geninfo_all_blocks=1 00:22:15.179 --rc geninfo_unexecuted_blocks=1 00:22:15.179 00:22:15.179 ' 00:22:15.179 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:15.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.179 --rc genhtml_branch_coverage=1 00:22:15.179 --rc genhtml_function_coverage=1 00:22:15.179 --rc genhtml_legend=1 00:22:15.179 --rc geninfo_all_blocks=1 00:22:15.179 --rc geninfo_unexecuted_blocks=1 00:22:15.179 00:22:15.179 ' 00:22:15.179 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:15.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.179 --rc genhtml_branch_coverage=1 00:22:15.179 --rc genhtml_function_coverage=1 00:22:15.179 --rc genhtml_legend=1 00:22:15.179 --rc geninfo_all_blocks=1 00:22:15.179 --rc geninfo_unexecuted_blocks=1 00:22:15.179 00:22:15.179 ' 00:22:15.179 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:15.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.179 --rc genhtml_branch_coverage=1 00:22:15.179 --rc genhtml_function_coverage=1 00:22:15.179 --rc genhtml_legend=1 00:22:15.179 --rc geninfo_all_blocks=1 00:22:15.179 --rc geninfo_unexecuted_blocks=1 00:22:15.179 00:22:15.179 ' 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
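The trim test opens the same way as the bdevperf one: source common.sh, then gate the lcov options on its version via the lt/cmp_versions helpers being traced here. The idiom boils down to a sort -V comparison; a reduced sketch (not the actual scripts/common.sh code):

    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo '1.15 < 2'   # true, matching the lcov_rc_opt branch taken above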
00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:15.179 19:06:46 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:15.180 19:06:46 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78426 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:22:15.180 19:06:46 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78426 00:22:15.180 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78426 ']' 00:22:15.180 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.180 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.180 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.180 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.180 19:06:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:15.438 [2024-11-26 19:06:46.528418] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:22:15.438 [2024-11-26 19:06:46.528651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78426 ] 00:22:15.696 [2024-11-26 19:06:46.711716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:15.696 [2024-11-26 19:06:46.879869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.696 [2024-11-26 19:06:46.879917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.696 [2024-11-26 19:06:46.879918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.630 19:06:47 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.630 19:06:47 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:16.630 19:06:47 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:16.630 19:06:47 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:22:16.630 19:06:47 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:16.630 19:06:47 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:22:16.630 19:06:47 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:22:16.630 19:06:47 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:17.196 19:06:48 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:17.196 19:06:48 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:22:17.196 19:06:48 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:17.196 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:17.196 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:17.196 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:17.196 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:17.196 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:17.454 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:17.454 { 00:22:17.454 "name": "nvme0n1", 00:22:17.454 "aliases": [ 
00:22:17.454 "5f5faa69-e730-46b3-95ed-5c59bd058ff2" 00:22:17.454 ], 00:22:17.454 "product_name": "NVMe disk", 00:22:17.454 "block_size": 4096, 00:22:17.454 "num_blocks": 1310720, 00:22:17.454 "uuid": "5f5faa69-e730-46b3-95ed-5c59bd058ff2", 00:22:17.454 "numa_id": -1, 00:22:17.454 "assigned_rate_limits": { 00:22:17.454 "rw_ios_per_sec": 0, 00:22:17.454 "rw_mbytes_per_sec": 0, 00:22:17.454 "r_mbytes_per_sec": 0, 00:22:17.454 "w_mbytes_per_sec": 0 00:22:17.454 }, 00:22:17.454 "claimed": true, 00:22:17.454 "claim_type": "read_many_write_one", 00:22:17.454 "zoned": false, 00:22:17.454 "supported_io_types": { 00:22:17.454 "read": true, 00:22:17.454 "write": true, 00:22:17.454 "unmap": true, 00:22:17.454 "flush": true, 00:22:17.454 "reset": true, 00:22:17.454 "nvme_admin": true, 00:22:17.454 "nvme_io": true, 00:22:17.454 "nvme_io_md": false, 00:22:17.454 "write_zeroes": true, 00:22:17.454 "zcopy": false, 00:22:17.454 "get_zone_info": false, 00:22:17.454 "zone_management": false, 00:22:17.454 "zone_append": false, 00:22:17.454 "compare": true, 00:22:17.454 "compare_and_write": false, 00:22:17.454 "abort": true, 00:22:17.454 "seek_hole": false, 00:22:17.454 "seek_data": false, 00:22:17.454 "copy": true, 00:22:17.454 "nvme_iov_md": false 00:22:17.454 }, 00:22:17.454 "driver_specific": { 00:22:17.454 "nvme": [ 00:22:17.454 { 00:22:17.454 "pci_address": "0000:00:11.0", 00:22:17.454 "trid": { 00:22:17.454 "trtype": "PCIe", 00:22:17.454 "traddr": "0000:00:11.0" 00:22:17.454 }, 00:22:17.454 "ctrlr_data": { 00:22:17.454 "cntlid": 0, 00:22:17.454 "vendor_id": "0x1b36", 00:22:17.454 "model_number": "QEMU NVMe Ctrl", 00:22:17.454 "serial_number": "12341", 00:22:17.454 "firmware_revision": "8.0.0", 00:22:17.454 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:17.454 "oacs": { 00:22:17.454 "security": 0, 00:22:17.454 "format": 1, 00:22:17.454 "firmware": 0, 00:22:17.454 "ns_manage": 1 00:22:17.454 }, 00:22:17.454 "multi_ctrlr": false, 00:22:17.454 "ana_reporting": false 00:22:17.455 }, 00:22:17.455 "vs": { 00:22:17.455 "nvme_version": "1.4" 00:22:17.455 }, 00:22:17.455 "ns_data": { 00:22:17.455 "id": 1, 00:22:17.455 "can_share": false 00:22:17.455 } 00:22:17.455 } 00:22:17.455 ], 00:22:17.455 "mp_policy": "active_passive" 00:22:17.455 } 00:22:17.455 } 00:22:17.455 ]' 00:22:17.455 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:17.712 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:17.712 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:17.712 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:17.712 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:17.712 19:06:48 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:22:17.712 19:06:48 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:22:17.712 19:06:48 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:17.712 19:06:48 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:22:17.712 19:06:48 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:17.712 19:06:48 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:17.970 19:06:49 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f4468de6-e46e-4788-9e96-1318eda70703 00:22:17.970 19:06:49 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:22:17.970 19:06:49 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f4468de6-e46e-4788-9e96-1318eda70703 00:22:18.588 19:06:49 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:18.846 19:06:49 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=6c971fc8-cd1b-48b9-8e41-2039a9e3f39c 00:22:18.846 19:06:49 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6c971fc8-cd1b-48b9-8e41-2039a9e3f39c 00:22:19.105 19:06:50 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:19.105 19:06:50 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:19.105 19:06:50 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:22:19.105 19:06:50 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:19.105 19:06:50 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:19.105 19:06:50 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:22:19.105 19:06:50 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:19.105 19:06:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:19.105 19:06:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:19.105 19:06:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:19.105 19:06:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:19.105 19:06:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:19.670 19:06:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:19.670 { 00:22:19.670 "name": "9e0acbb2-3758-4a28-a561-49cdc864a844", 00:22:19.670 "aliases": [ 00:22:19.670 "lvs/nvme0n1p0" 00:22:19.670 ], 00:22:19.670 "product_name": "Logical Volume", 00:22:19.670 "block_size": 4096, 00:22:19.670 "num_blocks": 26476544, 00:22:19.670 "uuid": "9e0acbb2-3758-4a28-a561-49cdc864a844", 00:22:19.670 "assigned_rate_limits": { 00:22:19.670 "rw_ios_per_sec": 0, 00:22:19.670 "rw_mbytes_per_sec": 0, 00:22:19.670 "r_mbytes_per_sec": 0, 00:22:19.670 "w_mbytes_per_sec": 0 00:22:19.670 }, 00:22:19.670 "claimed": false, 00:22:19.670 "zoned": false, 00:22:19.670 "supported_io_types": { 00:22:19.670 "read": true, 00:22:19.670 "write": true, 00:22:19.670 "unmap": true, 00:22:19.670 "flush": false, 00:22:19.670 "reset": true, 00:22:19.670 "nvme_admin": false, 00:22:19.670 "nvme_io": false, 00:22:19.670 "nvme_io_md": false, 00:22:19.670 "write_zeroes": true, 00:22:19.670 "zcopy": false, 00:22:19.670 "get_zone_info": false, 00:22:19.670 "zone_management": false, 00:22:19.670 "zone_append": false, 00:22:19.670 "compare": false, 00:22:19.670 "compare_and_write": false, 00:22:19.670 "abort": false, 00:22:19.670 "seek_hole": true, 00:22:19.670 "seek_data": true, 00:22:19.670 "copy": false, 00:22:19.670 "nvme_iov_md": false 00:22:19.670 }, 00:22:19.670 "driver_specific": { 00:22:19.670 "lvol": { 00:22:19.670 "lvol_store_uuid": "6c971fc8-cd1b-48b9-8e41-2039a9e3f39c", 00:22:19.670 "base_bdev": "nvme0n1", 00:22:19.670 "thin_provision": true, 00:22:19.670 "num_allocated_clusters": 0, 00:22:19.670 "snapshot": false, 00:22:19.670 "clone": false, 00:22:19.670 "esnap_clone": false 00:22:19.670 } 00:22:19.670 } 00:22:19.670 } 00:22:19.670 ]' 00:22:19.670 19:06:50 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:19.670 19:06:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:19.670 19:06:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:19.670 19:06:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:19.670 19:06:50 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:19.670 19:06:50 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:19.670 19:06:50 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:22:19.670 19:06:50 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:22:19.670 19:06:50 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:20.235 19:06:51 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:20.235 19:06:51 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:20.235 19:06:51 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:20.235 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:20.235 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:20.235 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:20.235 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:20.235 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:20.493 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:20.493 { 00:22:20.493 "name": "9e0acbb2-3758-4a28-a561-49cdc864a844", 00:22:20.493 "aliases": [ 00:22:20.493 "lvs/nvme0n1p0" 00:22:20.493 ], 00:22:20.493 "product_name": "Logical Volume", 00:22:20.493 "block_size": 4096, 00:22:20.493 "num_blocks": 26476544, 00:22:20.493 "uuid": "9e0acbb2-3758-4a28-a561-49cdc864a844", 00:22:20.493 "assigned_rate_limits": { 00:22:20.493 "rw_ios_per_sec": 0, 00:22:20.493 "rw_mbytes_per_sec": 0, 00:22:20.493 "r_mbytes_per_sec": 0, 00:22:20.493 "w_mbytes_per_sec": 0 00:22:20.493 }, 00:22:20.493 "claimed": false, 00:22:20.493 "zoned": false, 00:22:20.493 "supported_io_types": { 00:22:20.493 "read": true, 00:22:20.493 "write": true, 00:22:20.493 "unmap": true, 00:22:20.493 "flush": false, 00:22:20.493 "reset": true, 00:22:20.493 "nvme_admin": false, 00:22:20.493 "nvme_io": false, 00:22:20.493 "nvme_io_md": false, 00:22:20.493 "write_zeroes": true, 00:22:20.493 "zcopy": false, 00:22:20.493 "get_zone_info": false, 00:22:20.493 "zone_management": false, 00:22:20.493 "zone_append": false, 00:22:20.493 "compare": false, 00:22:20.493 "compare_and_write": false, 00:22:20.493 "abort": false, 00:22:20.493 "seek_hole": true, 00:22:20.493 "seek_data": true, 00:22:20.493 "copy": false, 00:22:20.493 "nvme_iov_md": false 00:22:20.493 }, 00:22:20.493 "driver_specific": { 00:22:20.493 "lvol": { 00:22:20.493 "lvol_store_uuid": "6c971fc8-cd1b-48b9-8e41-2039a9e3f39c", 00:22:20.493 "base_bdev": "nvme0n1", 00:22:20.493 "thin_provision": true, 00:22:20.493 "num_allocated_clusters": 0, 00:22:20.493 "snapshot": false, 00:22:20.493 "clone": false, 00:22:20.493 "esnap_clone": false 00:22:20.493 } 00:22:20.493 } 00:22:20.493 } 00:22:20.493 ]' 00:22:20.493 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:20.493 19:06:51 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:22:20.493 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:20.493 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:20.493 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:20.493 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:20.493 19:06:51 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:22:20.493 19:06:51 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:20.751 19:06:51 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:22:20.751 19:06:51 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:22:20.751 19:06:51 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:20.751 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:20.752 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:20.752 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:20.752 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:20.752 19:06:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9e0acbb2-3758-4a28-a561-49cdc864a844 00:22:21.009 19:06:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:21.009 { 00:22:21.009 "name": "9e0acbb2-3758-4a28-a561-49cdc864a844", 00:22:21.009 "aliases": [ 00:22:21.009 "lvs/nvme0n1p0" 00:22:21.009 ], 00:22:21.009 "product_name": "Logical Volume", 00:22:21.009 "block_size": 4096, 00:22:21.009 "num_blocks": 26476544, 00:22:21.009 "uuid": "9e0acbb2-3758-4a28-a561-49cdc864a844", 00:22:21.009 "assigned_rate_limits": { 00:22:21.009 "rw_ios_per_sec": 0, 00:22:21.009 "rw_mbytes_per_sec": 0, 00:22:21.009 "r_mbytes_per_sec": 0, 00:22:21.009 "w_mbytes_per_sec": 0 00:22:21.009 }, 00:22:21.009 "claimed": false, 00:22:21.009 "zoned": false, 00:22:21.009 "supported_io_types": { 00:22:21.009 "read": true, 00:22:21.009 "write": true, 00:22:21.009 "unmap": true, 00:22:21.009 "flush": false, 00:22:21.009 "reset": true, 00:22:21.009 "nvme_admin": false, 00:22:21.009 "nvme_io": false, 00:22:21.009 "nvme_io_md": false, 00:22:21.009 "write_zeroes": true, 00:22:21.009 "zcopy": false, 00:22:21.009 "get_zone_info": false, 00:22:21.009 "zone_management": false, 00:22:21.009 "zone_append": false, 00:22:21.009 "compare": false, 00:22:21.009 "compare_and_write": false, 00:22:21.009 "abort": false, 00:22:21.009 "seek_hole": true, 00:22:21.009 "seek_data": true, 00:22:21.009 "copy": false, 00:22:21.009 "nvme_iov_md": false 00:22:21.009 }, 00:22:21.009 "driver_specific": { 00:22:21.009 "lvol": { 00:22:21.009 "lvol_store_uuid": "6c971fc8-cd1b-48b9-8e41-2039a9e3f39c", 00:22:21.009 "base_bdev": "nvme0n1", 00:22:21.009 "thin_provision": true, 00:22:21.009 "num_allocated_clusters": 0, 00:22:21.009 "snapshot": false, 00:22:21.009 "clone": false, 00:22:21.009 "esnap_clone": false 00:22:21.009 } 00:22:21.009 } 00:22:21.009 } 00:22:21.009 ]' 00:22:21.009 19:06:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:21.267 19:06:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:21.267 19:06:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:21.267 19:06:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:22:21.267 19:06:52 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:21.267 19:06:52 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:21.267 19:06:52 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:22:21.267 19:06:52 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9e0acbb2-3758-4a28-a561-49cdc864a844 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:22:21.835 [2024-11-26 19:06:52.776501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.835 [2024-11-26 19:06:52.776580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:21.835 [2024-11-26 19:06:52.776610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:21.835 [2024-11-26 19:06:52.776624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.835 [2024-11-26 19:06:52.780656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.835 [2024-11-26 19:06:52.780742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.835 [2024-11-26 19:06:52.780782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms 00:22:21.835 [2024-11-26 19:06:52.780809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.835 [2024-11-26 19:06:52.781261] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:21.835 [2024-11-26 19:06:52.782284] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:21.835 [2024-11-26 19:06:52.782336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.835 [2024-11-26 19:06:52.782352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:21.835 [2024-11-26 19:06:52.782368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 00:22:21.835 [2024-11-26 19:06:52.782381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.835 [2024-11-26 19:06:52.782568] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID daff057d-85bc-40d2-a74e-43f65c8e8de4 00:22:21.835 [2024-11-26 19:06:52.783821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.835 [2024-11-26 19:06:52.783874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:21.835 [2024-11-26 19:06:52.783893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:21.835 [2024-11-26 19:06:52.783909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.835 [2024-11-26 19:06:52.789058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.835 [2024-11-26 19:06:52.789149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:21.835 [2024-11-26 19:06:52.789185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.052 ms 00:22:21.835 [2024-11-26 19:06:52.789209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.835 [2024-11-26 19:06:52.789482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.835 [2024-11-26 19:06:52.789516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:21.835 [2024-11-26 19:06:52.789533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.110 ms 00:22:21.835 [2024-11-26 19:06:52.789553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.835 [2024-11-26 19:06:52.789606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.835 [2024-11-26 19:06:52.789626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:21.835 [2024-11-26 19:06:52.789641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:21.835 [2024-11-26 19:06:52.789659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.835 [2024-11-26 19:06:52.789707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:21.835 [2024-11-26 19:06:52.794366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.835 [2024-11-26 19:06:52.794420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:21.835 [2024-11-26 19:06:52.794444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.668 ms 00:22:21.835 [2024-11-26 19:06:52.794458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.835 [2024-11-26 19:06:52.794583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.835 [2024-11-26 19:06:52.794635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:21.835 [2024-11-26 19:06:52.794655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:21.835 [2024-11-26 19:06:52.794668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.835 [2024-11-26 19:06:52.794714] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:21.835 [2024-11-26 19:06:52.794878] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:21.835 [2024-11-26 19:06:52.794914] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:21.835 [2024-11-26 19:06:52.794933] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:21.835 [2024-11-26 19:06:52.794952] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:21.835 [2024-11-26 19:06:52.794968] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:21.835 [2024-11-26 19:06:52.794983] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:21.835 [2024-11-26 19:06:52.794996] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:21.835 [2024-11-26 19:06:52.795012] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:21.835 [2024-11-26 19:06:52.795025] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:21.835 [2024-11-26 19:06:52.795039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.835 [2024-11-26 19:06:52.795052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:21.835 [2024-11-26 19:06:52.795067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:22:21.835 [2024-11-26 19:06:52.795080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.835 [2024-11-26 19:06:52.795212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.835 
[2024-11-26 19:06:52.795236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:21.835 [2024-11-26 19:06:52.795253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:21.835 [2024-11-26 19:06:52.795266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.835 [2024-11-26 19:06:52.795410] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:21.835 [2024-11-26 19:06:52.795429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:21.835 [2024-11-26 19:06:52.795446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:21.835 [2024-11-26 19:06:52.795459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.835 [2024-11-26 19:06:52.795473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:21.835 [2024-11-26 19:06:52.795485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:21.835 [2024-11-26 19:06:52.795499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:21.835 [2024-11-26 19:06:52.795525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:21.835 [2024-11-26 19:06:52.795541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:21.835 [2024-11-26 19:06:52.795553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:21.835 [2024-11-26 19:06:52.795566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:21.835 [2024-11-26 19:06:52.795578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:21.835 [2024-11-26 19:06:52.795591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:21.835 [2024-11-26 19:06:52.795602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:21.835 [2024-11-26 19:06:52.795616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:21.835 [2024-11-26 19:06:52.795628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.835 [2024-11-26 19:06:52.795644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:21.835 [2024-11-26 19:06:52.795656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:21.835 [2024-11-26 19:06:52.795669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.835 [2024-11-26 19:06:52.795681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:21.835 [2024-11-26 19:06:52.795697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:21.835 [2024-11-26 19:06:52.795709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.835 [2024-11-26 19:06:52.795722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:21.835 [2024-11-26 19:06:52.795734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:21.835 [2024-11-26 19:06:52.795749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.835 [2024-11-26 19:06:52.795761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:21.835 [2024-11-26 19:06:52.795775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:21.835 [2024-11-26 19:06:52.795786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.835 [2024-11-26 19:06:52.795800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:22:21.835 [2024-11-26 19:06:52.795812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:21.835 [2024-11-26 19:06:52.795825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.835 [2024-11-26 19:06:52.795836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:21.835 [2024-11-26 19:06:52.795852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:21.836 [2024-11-26 19:06:52.795864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:21.836 [2024-11-26 19:06:52.795878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:21.836 [2024-11-26 19:06:52.795890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:21.836 [2024-11-26 19:06:52.795903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:21.836 [2024-11-26 19:06:52.795914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:21.836 [2024-11-26 19:06:52.795928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:21.836 [2024-11-26 19:06:52.795939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.836 [2024-11-26 19:06:52.795952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:21.836 [2024-11-26 19:06:52.795965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:21.836 [2024-11-26 19:06:52.795978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.836 [2024-11-26 19:06:52.795989] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:21.836 [2024-11-26 19:06:52.796004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:21.836 [2024-11-26 19:06:52.796017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:21.836 [2024-11-26 19:06:52.796031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.836 [2024-11-26 19:06:52.796043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:21.836 [2024-11-26 19:06:52.796062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:21.836 [2024-11-26 19:06:52.796074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:21.836 [2024-11-26 19:06:52.796088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:21.836 [2024-11-26 19:06:52.796099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:21.836 [2024-11-26 19:06:52.796113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:21.836 [2024-11-26 19:06:52.796130] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:21.836 [2024-11-26 19:06:52.796149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.836 [2024-11-26 19:06:52.796189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:21.836 [2024-11-26 19:06:52.796214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:21.836 [2024-11-26 19:06:52.796227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:22:21.836 [2024-11-26 19:06:52.796255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:21.836 [2024-11-26 19:06:52.796271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:21.836 [2024-11-26 19:06:52.796290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:21.836 [2024-11-26 19:06:52.796303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:21.836 [2024-11-26 19:06:52.796322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:21.836 [2024-11-26 19:06:52.796335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:21.836 [2024-11-26 19:06:52.796357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:21.836 [2024-11-26 19:06:52.796371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:21.836 [2024-11-26 19:06:52.796389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:21.836 [2024-11-26 19:06:52.796403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:21.836 [2024-11-26 19:06:52.796423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:21.836 [2024-11-26 19:06:52.796437] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:21.836 [2024-11-26 19:06:52.796458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.836 [2024-11-26 19:06:52.796473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:21.836 [2024-11-26 19:06:52.796488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:21.836 [2024-11-26 19:06:52.796500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:21.836 [2024-11-26 19:06:52.796515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:21.836 [2024-11-26 19:06:52.796530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.836 [2024-11-26 19:06:52.796544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:21.836 [2024-11-26 19:06:52.796558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:22:21.836 [2024-11-26 19:06:52.796572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.836 [2024-11-26 19:06:52.796667] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:22:21.836 [2024-11-26 19:06:52.796697] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:23.734 [2024-11-26 19:06:54.923208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.734 [2024-11-26 19:06:54.923307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:23.734 [2024-11-26 19:06:54.923332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2126.548 ms 00:22:23.734 [2024-11-26 19:06:54.923350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:54.958131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:54.958232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.993 [2024-11-26 19:06:54.958257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.368 ms 00:22:23.993 [2024-11-26 19:06:54.958274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:54.958484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:54.958511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:23.993 [2024-11-26 19:06:54.958554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:23.993 [2024-11-26 19:06:54.958579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:55.014484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:55.014571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:23.993 [2024-11-26 19:06:55.014593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.859 ms 00:22:23.993 [2024-11-26 19:06:55.014615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:55.014881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:55.014924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:23.993 [2024-11-26 19:06:55.014942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:23.993 [2024-11-26 19:06:55.014957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:55.015471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:55.015549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:23.993 [2024-11-26 19:06:55.015581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:22:23.993 [2024-11-26 19:06:55.015611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:55.015850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:55.015894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:23.993 [2024-11-26 19:06:55.015932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:22:23.993 [2024-11-26 19:06:55.015951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:55.035618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:55.035697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:22:23.993 [2024-11-26 19:06:55.035720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.618 ms 00:22:23.993 [2024-11-26 19:06:55.035736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:55.049913] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:23.993 [2024-11-26 19:06:55.064783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:55.064877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:23.993 [2024-11-26 19:06:55.064903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.840 ms 00:22:23.993 [2024-11-26 19:06:55.064918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:55.129799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:55.129888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:23.993 [2024-11-26 19:06:55.129915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.708 ms 00:22:23.993 [2024-11-26 19:06:55.129929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:55.130304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:55.130337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:23.993 [2024-11-26 19:06:55.130361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:22:23.993 [2024-11-26 19:06:55.130374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:55.163960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:55.164085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:23.993 [2024-11-26 19:06:55.164127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.512 ms 00:22:23.993 [2024-11-26 19:06:55.164158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:55.197394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:55.197482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:23.993 [2024-11-26 19:06:55.197510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.907 ms 00:22:23.993 [2024-11-26 19:06:55.197524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.993 [2024-11-26 19:06:55.198417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.993 [2024-11-26 19:06:55.198455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:23.993 [2024-11-26 19:06:55.198475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:22:23.993 [2024-11-26 19:06:55.198489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.252 [2024-11-26 19:06:55.301769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.252 [2024-11-26 19:06:55.301869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:24.252 [2024-11-26 19:06:55.301914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.217 ms 00:22:24.252 [2024-11-26 19:06:55.301940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
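Everything from "Check configuration" down to "Wipe P2L region" is the FTL management state machine executing the startup pipeline kicked off by the bdev_ftl_create RPC at ftl/trim.sh@49: each Action/name/duration/status quartet is one trace_step for a pipeline stage, and the slow stages (Open base bdev, Initialize memory pools, Scrub NV cache) dominate the total "FTL startup" duration reported just below. A minimal bash sketch of driving the same bring-up by hand against a running spdk_tgt, assuming the thin-provisioned lvol and the nvc0n1p0 split already exist; the bdev name passed to -d is a placeholder alias (the run above passed the lvol UUID instead):

  # Sketch of the FTL bring-up traced above; rpc path and base_lvol are
  # placeholders, flags match the ones visible in the trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  base_lvol=lvs/nvme0n1p0   # placeholder alias for the 103424 MiB lvol

  # Create the FTL bdev: lvol as the data device, nvc0n1p0 as NV cache,
  # 60 MiB L2P DRAM limit, 10% overprovisioning. The 240 s RPC timeout
  # covers the NV-cache scrub that runs on first startup.
  "$rpc" -t 240 bdev_ftl_create -b ftl0 -d "$base_lvol" -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

  # Wait for examine callbacks to finish, then confirm ftl0 registered
  # with the expected geometry.
  "$rpc" bdev_wait_for_examine
  "$rpc" bdev_get_bdevs -b ftl0 | jq '.[0] | {block_size, num_blocks}'

With the 60 MiB L2P limit the device exposes 23592960 blocks (as the bdev_get_bdevs dump further below confirms), fewer than the lvol's 26476544, since part of the capacity is held back for overprovisioning and metadata.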
00:22:24.252 [2024-11-26 19:06:55.337621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.252 [2024-11-26 19:06:55.337722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:24.252 [2024-11-26 19:06:55.337749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.376 ms 00:22:24.252 [2024-11-26 19:06:55.337764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.252 [2024-11-26 19:06:55.372091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.252 [2024-11-26 19:06:55.372190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:24.252 [2024-11-26 19:06:55.372217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.100 ms 00:22:24.252 [2024-11-26 19:06:55.372231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.252 [2024-11-26 19:06:55.408074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.252 [2024-11-26 19:06:55.408261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:24.252 [2024-11-26 19:06:55.408305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.632 ms 00:22:24.252 [2024-11-26 19:06:55.408340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.252 [2024-11-26 19:06:55.408575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.252 [2024-11-26 19:06:55.408600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:24.252 [2024-11-26 19:06:55.408625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:24.252 [2024-11-26 19:06:55.408639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.252 [2024-11-26 19:06:55.408760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.252 [2024-11-26 19:06:55.408805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:24.252 [2024-11-26 19:06:55.408841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:24.252 [2024-11-26 19:06:55.408865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.252 [2024-11-26 19:06:55.410555] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:24.252 [2024-11-26 19:06:55.415841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2633.426 ms, result 0 00:22:24.252 [2024-11-26 19:06:55.416880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:24.252 { 00:22:24.252 "name": "ftl0", 00:22:24.252 "uuid": "daff057d-85bc-40d2-a74e-43f65c8e8de4" 00:22:24.252 } 00:22:24.252 19:06:55 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:22:24.252 19:06:55 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:22:24.252 19:06:55 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:24.252 19:06:55 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:22:24.252 19:06:55 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:24.252 19:06:55 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:24.252 19:06:55 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:24.838 19:06:55 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:25.097 [ 00:22:25.097 { 00:22:25.097 "name": "ftl0", 00:22:25.097 "aliases": [ 00:22:25.097 "daff057d-85bc-40d2-a74e-43f65c8e8de4" 00:22:25.097 ], 00:22:25.097 "product_name": "FTL disk", 00:22:25.097 "block_size": 4096, 00:22:25.097 "num_blocks": 23592960, 00:22:25.097 "uuid": "daff057d-85bc-40d2-a74e-43f65c8e8de4", 00:22:25.097 "assigned_rate_limits": { 00:22:25.097 "rw_ios_per_sec": 0, 00:22:25.097 "rw_mbytes_per_sec": 0, 00:22:25.097 "r_mbytes_per_sec": 0, 00:22:25.097 "w_mbytes_per_sec": 0 00:22:25.097 }, 00:22:25.097 "claimed": false, 00:22:25.097 "zoned": false, 00:22:25.097 "supported_io_types": { 00:22:25.097 "read": true, 00:22:25.097 "write": true, 00:22:25.097 "unmap": true, 00:22:25.097 "flush": true, 00:22:25.097 "reset": false, 00:22:25.097 "nvme_admin": false, 00:22:25.097 "nvme_io": false, 00:22:25.097 "nvme_io_md": false, 00:22:25.097 "write_zeroes": true, 00:22:25.097 "zcopy": false, 00:22:25.097 "get_zone_info": false, 00:22:25.097 "zone_management": false, 00:22:25.097 "zone_append": false, 00:22:25.097 "compare": false, 00:22:25.097 "compare_and_write": false, 00:22:25.097 "abort": false, 00:22:25.097 "seek_hole": false, 00:22:25.097 "seek_data": false, 00:22:25.097 "copy": false, 00:22:25.097 "nvme_iov_md": false 00:22:25.097 }, 00:22:25.097 "driver_specific": { 00:22:25.097 "ftl": { 00:22:25.097 "base_bdev": "9e0acbb2-3758-4a28-a561-49cdc864a844", 00:22:25.097 "cache": "nvc0n1p0" 00:22:25.097 } 00:22:25.097 } 00:22:25.097 } 00:22:25.097 ] 00:22:25.097 19:06:56 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:22:25.097 19:06:56 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:22:25.097 19:06:56 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:25.356 19:06:56 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:22:25.356 19:06:56 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:22:25.615 19:06:56 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:22:25.615 { 00:22:25.615 "name": "ftl0", 00:22:25.615 "aliases": [ 00:22:25.615 "daff057d-85bc-40d2-a74e-43f65c8e8de4" 00:22:25.615 ], 00:22:25.615 "product_name": "FTL disk", 00:22:25.615 "block_size": 4096, 00:22:25.615 "num_blocks": 23592960, 00:22:25.615 "uuid": "daff057d-85bc-40d2-a74e-43f65c8e8de4", 00:22:25.615 "assigned_rate_limits": { 00:22:25.615 "rw_ios_per_sec": 0, 00:22:25.615 "rw_mbytes_per_sec": 0, 00:22:25.615 "r_mbytes_per_sec": 0, 00:22:25.615 "w_mbytes_per_sec": 0 00:22:25.615 }, 00:22:25.615 "claimed": false, 00:22:25.615 "zoned": false, 00:22:25.615 "supported_io_types": { 00:22:25.615 "read": true, 00:22:25.615 "write": true, 00:22:25.615 "unmap": true, 00:22:25.615 "flush": true, 00:22:25.615 "reset": false, 00:22:25.615 "nvme_admin": false, 00:22:25.615 "nvme_io": false, 00:22:25.615 "nvme_io_md": false, 00:22:25.615 "write_zeroes": true, 00:22:25.615 "zcopy": false, 00:22:25.615 "get_zone_info": false, 00:22:25.615 "zone_management": false, 00:22:25.615 "zone_append": false, 00:22:25.615 "compare": false, 00:22:25.615 "compare_and_write": false, 00:22:25.615 "abort": false, 00:22:25.615 "seek_hole": false, 00:22:25.615 "seek_data": false, 00:22:25.615 "copy": false, 00:22:25.615 "nvme_iov_md": false 00:22:25.615 }, 00:22:25.615 "driver_specific": { 00:22:25.615 "ftl": { 00:22:25.615 "base_bdev": "9e0acbb2-3758-4a28-a561-49cdc864a844", 
00:22:25.615 "cache": "nvc0n1p0" 00:22:25.615 } 00:22:25.615 } 00:22:25.615 } 00:22:25.615 ]' 00:22:25.615 19:06:56 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:22:25.872 19:06:56 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:22:25.872 19:06:56 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:26.131 [2024-11-26 19:06:57.203008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.131 [2024-11-26 19:06:57.203090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:26.131 [2024-11-26 19:06:57.203115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:26.131 [2024-11-26 19:06:57.203130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.131 [2024-11-26 19:06:57.203195] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:26.131 [2024-11-26 19:06:57.206601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.131 [2024-11-26 19:06:57.206651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:26.131 [2024-11-26 19:06:57.206684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.366 ms 00:22:26.131 [2024-11-26 19:06:57.206707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.131 [2024-11-26 19:06:57.207315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.131 [2024-11-26 19:06:57.207350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:26.131 [2024-11-26 19:06:57.207369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:22:26.131 [2024-11-26 19:06:57.207382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.131 [2024-11-26 19:06:57.211166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.131 [2024-11-26 19:06:57.211227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:26.131 [2024-11-26 19:06:57.211248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:22:26.131 [2024-11-26 19:06:57.211261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.131 [2024-11-26 19:06:57.219148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.131 [2024-11-26 19:06:57.219262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:26.132 [2024-11-26 19:06:57.219286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.789 ms 00:22:26.132 [2024-11-26 19:06:57.219300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.132 [2024-11-26 19:06:57.252729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.132 [2024-11-26 19:06:57.252819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:26.132 [2024-11-26 19:06:57.252848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.225 ms 00:22:26.132 [2024-11-26 19:06:57.252862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.132 [2024-11-26 19:06:57.273047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.132 [2024-11-26 19:06:57.273141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:26.132 [2024-11-26 19:06:57.273183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 19.947 ms 00:22:26.132 [2024-11-26 19:06:57.273202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.132 [2024-11-26 19:06:57.273585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.132 [2024-11-26 19:06:57.273633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:26.132 [2024-11-26 19:06:57.273677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:22:26.132 [2024-11-26 19:06:57.273701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.132 [2024-11-26 19:06:57.307081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.132 [2024-11-26 19:06:57.307183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:26.132 [2024-11-26 19:06:57.307209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.299 ms 00:22:26.132 [2024-11-26 19:06:57.307223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.132 [2024-11-26 19:06:57.341100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.132 [2024-11-26 19:06:57.341189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:26.132 [2024-11-26 19:06:57.341229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.677 ms 00:22:26.132 [2024-11-26 19:06:57.341252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.391 [2024-11-26 19:06:57.374882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.391 [2024-11-26 19:06:57.374963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:26.391 [2024-11-26 19:06:57.374988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.360 ms 00:22:26.391 [2024-11-26 19:06:57.375002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.391 [2024-11-26 19:06:57.407518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.391 [2024-11-26 19:06:57.407601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:26.391 [2024-11-26 19:06:57.407625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.225 ms 00:22:26.391 [2024-11-26 19:06:57.407639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.391 [2024-11-26 19:06:57.407818] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:26.391 [2024-11-26 19:06:57.407866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:26.391 [2024-11-26 19:06:57.407897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:26.391 [2024-11-26 19:06:57.407915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:26.391 [2024-11-26 19:06:57.407934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:26.391 [2024-11-26 19:06:57.407951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:26.391 [2024-11-26 19:06:57.407983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:26.391 [2024-11-26 19:06:57.408015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:26.391 [2024-11-26 19:06:57.408043] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:26.391 [2024-11-26 19:06:57.408065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:26.391 [2024-11-26 19:06:57.408090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:26.391 [2024-11-26 19:06:57.408106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:26.391 [2024-11-26 19:06:57.408123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 
[2024-11-26 19:06:57.408786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.408988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:22:26.392 [2024-11-26 19:06:57.409457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.409980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:26.392 [2024-11-26 19:06:57.410596] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:26.393 [2024-11-26 19:06:57.410633] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: daff057d-85bc-40d2-a74e-43f65c8e8de4 00:22:26.393 [2024-11-26 19:06:57.410658] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:26.393 [2024-11-26 19:06:57.410684] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:26.393 [2024-11-26 19:06:57.410709] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:26.393 [2024-11-26 19:06:57.410744] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:26.393 [2024-11-26 19:06:57.410774] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:26.393 [2024-11-26 19:06:57.410807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:22:26.393 [2024-11-26 19:06:57.410832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:26.393 [2024-11-26 19:06:57.410854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:26.393 [2024-11-26 19:06:57.410869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:26.393 [2024-11-26 19:06:57.410895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.393 [2024-11-26 19:06:57.410920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:26.393 [2024-11-26 19:06:57.410959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:22:26.393 [2024-11-26 19:06:57.410987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.393 [2024-11-26 19:06:57.428261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.393 [2024-11-26 19:06:57.428333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:26.393 [2024-11-26 19:06:57.428361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.200 ms 00:22:26.393 [2024-11-26 19:06:57.428374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.393 [2024-11-26 19:06:57.429056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.393 [2024-11-26 19:06:57.429097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:26.393 [2024-11-26 19:06:57.429117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:22:26.393 [2024-11-26 19:06:57.429130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.393 [2024-11-26 19:06:57.489095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.393 [2024-11-26 19:06:57.489200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:26.393 [2024-11-26 19:06:57.489228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.393 [2024-11-26 19:06:57.489241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.393 [2024-11-26 19:06:57.489432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.393 [2024-11-26 19:06:57.489454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:26.393 [2024-11-26 19:06:57.489480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.393 [2024-11-26 19:06:57.489502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.393 [2024-11-26 19:06:57.489640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.393 [2024-11-26 19:06:57.489685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:26.393 [2024-11-26 19:06:57.489720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.393 [2024-11-26 19:06:57.489746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.393 [2024-11-26 19:06:57.489811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.393 [2024-11-26 19:06:57.489838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:26.393 [2024-11-26 19:06:57.489864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.393 [2024-11-26 19:06:57.489887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.393 [2024-11-26 19:06:57.602906] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.393 [2024-11-26 19:06:57.602982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:26.393 [2024-11-26 19:06:57.603006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.393 [2024-11-26 19:06:57.603019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.651 [2024-11-26 19:06:57.689672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.651 [2024-11-26 19:06:57.689751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:26.651 [2024-11-26 19:06:57.689776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.651 [2024-11-26 19:06:57.689790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.651 [2024-11-26 19:06:57.689952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.651 [2024-11-26 19:06:57.689972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:26.651 [2024-11-26 19:06:57.689998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.651 [2024-11-26 19:06:57.690010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.651 [2024-11-26 19:06:57.690076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.651 [2024-11-26 19:06:57.690092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:26.651 [2024-11-26 19:06:57.690106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.651 [2024-11-26 19:06:57.690119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.651 [2024-11-26 19:06:57.690308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.651 [2024-11-26 19:06:57.690329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:26.651 [2024-11-26 19:06:57.690346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.651 [2024-11-26 19:06:57.690361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.651 [2024-11-26 19:06:57.690441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.651 [2024-11-26 19:06:57.690462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:26.651 [2024-11-26 19:06:57.690477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.651 [2024-11-26 19:06:57.690489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.651 [2024-11-26 19:06:57.690559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.651 [2024-11-26 19:06:57.690575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:26.651 [2024-11-26 19:06:57.690593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.651 [2024-11-26 19:06:57.690608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.651 [2024-11-26 19:06:57.690682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:26.651 [2024-11-26 19:06:57.690702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:26.651 [2024-11-26 19:06:57.690717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:26.651 [2024-11-26 19:06:57.690730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:26.651 [2024-11-26 19:06:57.690952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 487.928 ms, result 0 00:22:26.651 true 00:22:26.651 19:06:57 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78426 00:22:26.651 19:06:57 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78426 ']' 00:22:26.651 19:06:57 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78426 00:22:26.651 19:06:57 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:26.651 19:06:57 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.651 19:06:57 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78426 00:22:26.651 19:06:57 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:26.651 19:06:57 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:26.651 killing process with pid 78426 00:22:26.651 19:06:57 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78426' 00:22:26.651 19:06:57 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78426 00:22:26.651 19:06:57 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78426 00:22:31.918 19:07:02 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:22:32.856 65536+0 records in 00:22:32.856 65536+0 records out 00:22:32.856 268435456 bytes (268 MB, 256 MiB) copied, 1.41765 s, 189 MB/s 00:22:32.856 19:07:03 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:32.856 [2024-11-26 19:07:03.967145] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
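The two test steps traced above reduce to a pattern-write pair: dd produces 65536 4-KiB blocks (65536 * 4096 = 268435456 bytes, i.e. 256 MiB, which at 1.41765 s works out to the reported ~189 MB/s), and spdk_dd then replays that file onto the ftl0 bdev. A minimal sketch of the equivalent shell, assuming trim.sh redirects dd's output into test/ftl/random_pattern (the redirect itself is not shown in the trace; all other names are taken verbatim from the log):

    # generate 256 MiB of random data in 4 KiB blocks (65536 * 4096 B = 268435456 B)
    dd if=/dev/urandom of=test/ftl/random_pattern bs=4K count=65536
    # replay the pattern onto the FTL bdev using the saved bdev configuration
    build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 \
        --json=test/ftl/config/ftl.json
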
00:22:32.856 [2024-11-26 19:07:03.967321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78642 ] 00:22:33.116 [2024-11-26 19:07:04.141563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.116 [2024-11-26 19:07:04.294253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.684 [2024-11-26 19:07:04.625241] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:33.684 [2024-11-26 19:07:04.625341] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:33.684 [2024-11-26 19:07:04.790316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.684 [2024-11-26 19:07:04.790634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:33.684 [2024-11-26 19:07:04.790668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:33.684 [2024-11-26 19:07:04.790681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.684 [2024-11-26 19:07:04.794318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.684 [2024-11-26 19:07:04.794371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:33.684 [2024-11-26 19:07:04.794390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.592 ms 00:22:33.684 [2024-11-26 19:07:04.794402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.685 [2024-11-26 19:07:04.794574] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:33.685 [2024-11-26 19:07:04.795566] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:33.685 [2024-11-26 19:07:04.795617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.685 [2024-11-26 19:07:04.795633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:33.685 [2024-11-26 19:07:04.795645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:22:33.685 [2024-11-26 19:07:04.795657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.685 [2024-11-26 19:07:04.796886] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:33.685 [2024-11-26 19:07:04.814647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.685 [2024-11-26 19:07:04.814750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:33.685 [2024-11-26 19:07:04.814795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.754 ms 00:22:33.685 [2024-11-26 19:07:04.814825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.685 [2024-11-26 19:07:04.815075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.685 [2024-11-26 19:07:04.815099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:33.685 [2024-11-26 19:07:04.815114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:33.685 [2024-11-26 19:07:04.815126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.685 [2024-11-26 19:07:04.820263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:33.685 [2024-11-26 19:07:04.820339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:33.685 [2024-11-26 19:07:04.820361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.030 ms 00:22:33.685 [2024-11-26 19:07:04.820373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.685 [2024-11-26 19:07:04.820558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.685 [2024-11-26 19:07:04.820583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:33.685 [2024-11-26 19:07:04.820605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:22:33.685 [2024-11-26 19:07:04.820616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.685 [2024-11-26 19:07:04.820664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.685 [2024-11-26 19:07:04.820682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:33.685 [2024-11-26 19:07:04.820695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:33.685 [2024-11-26 19:07:04.820706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.685 [2024-11-26 19:07:04.820740] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:33.685 [2024-11-26 19:07:04.825186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.685 [2024-11-26 19:07:04.825418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:33.685 [2024-11-26 19:07:04.825454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.439 ms 00:22:33.685 [2024-11-26 19:07:04.825469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.685 [2024-11-26 19:07:04.825617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.685 [2024-11-26 19:07:04.825639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:33.685 [2024-11-26 19:07:04.825654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:33.685 [2024-11-26 19:07:04.825665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.685 [2024-11-26 19:07:04.825708] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:33.685 [2024-11-26 19:07:04.825740] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:33.685 [2024-11-26 19:07:04.825785] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:33.685 [2024-11-26 19:07:04.825805] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:33.685 [2024-11-26 19:07:04.825921] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:33.685 [2024-11-26 19:07:04.825938] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:33.685 [2024-11-26 19:07:04.825953] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:33.685 [2024-11-26 19:07:04.825973] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:33.685 [2024-11-26 19:07:04.825987] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:33.685 [2024-11-26 19:07:04.826000] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:33.685 [2024-11-26 19:07:04.826011] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:33.685 [2024-11-26 19:07:04.826022] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:33.685 [2024-11-26 19:07:04.826033] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:33.685 [2024-11-26 19:07:04.826046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.685 [2024-11-26 19:07:04.826057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:33.685 [2024-11-26 19:07:04.826069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:22:33.685 [2024-11-26 19:07:04.826081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.685 [2024-11-26 19:07:04.826208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.685 [2024-11-26 19:07:04.826233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:33.685 [2024-11-26 19:07:04.826246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:22:33.685 [2024-11-26 19:07:04.826258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.685 [2024-11-26 19:07:04.826382] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:33.685 [2024-11-26 19:07:04.826401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:33.685 [2024-11-26 19:07:04.826414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:33.685 [2024-11-26 19:07:04.826426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:33.685 [2024-11-26 19:07:04.826449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:33.685 [2024-11-26 19:07:04.826470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:33.685 [2024-11-26 19:07:04.826482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:33.685 [2024-11-26 19:07:04.826503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:33.685 [2024-11-26 19:07:04.826528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:33.685 [2024-11-26 19:07:04.826539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:33.685 [2024-11-26 19:07:04.826550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:33.685 [2024-11-26 19:07:04.826561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:33.685 [2024-11-26 19:07:04.826571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:33.685 [2024-11-26 19:07:04.826593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:33.685 [2024-11-26 19:07:04.826603] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:33.685 [2024-11-26 19:07:04.826624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:33.685 [2024-11-26 19:07:04.826645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:33.685 [2024-11-26 19:07:04.826655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:33.685 [2024-11-26 19:07:04.826677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:33.685 [2024-11-26 19:07:04.826693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:33.685 [2024-11-26 19:07:04.826714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:33.685 [2024-11-26 19:07:04.826725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:33.685 [2024-11-26 19:07:04.826746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:33.685 [2024-11-26 19:07:04.826757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:33.685 [2024-11-26 19:07:04.826777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:33.685 [2024-11-26 19:07:04.826788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:33.685 [2024-11-26 19:07:04.826799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:33.685 [2024-11-26 19:07:04.826809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:33.685 [2024-11-26 19:07:04.826820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:33.685 [2024-11-26 19:07:04.826830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:33.685 [2024-11-26 19:07:04.826851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:33.685 [2024-11-26 19:07:04.826861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.685 [2024-11-26 19:07:04.826873] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:33.686 [2024-11-26 19:07:04.826884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:33.686 [2024-11-26 19:07:04.826900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:33.686 [2024-11-26 19:07:04.826911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:33.686 [2024-11-26 19:07:04.826923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:33.686 [2024-11-26 19:07:04.826934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:33.686 [2024-11-26 19:07:04.826944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:33.686 
[2024-11-26 19:07:04.826955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:33.686 [2024-11-26 19:07:04.826965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:33.686 [2024-11-26 19:07:04.826976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:33.686 [2024-11-26 19:07:04.826988] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:33.686 [2024-11-26 19:07:04.827003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:33.686 [2024-11-26 19:07:04.827016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:33.686 [2024-11-26 19:07:04.827027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:33.686 [2024-11-26 19:07:04.827039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:33.686 [2024-11-26 19:07:04.827052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:33.686 [2024-11-26 19:07:04.827064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:33.686 [2024-11-26 19:07:04.827075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:33.686 [2024-11-26 19:07:04.827087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:33.686 [2024-11-26 19:07:04.827098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:33.686 [2024-11-26 19:07:04.827109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:33.686 [2024-11-26 19:07:04.827121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:33.686 [2024-11-26 19:07:04.827132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:33.686 [2024-11-26 19:07:04.827143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:33.686 [2024-11-26 19:07:04.827154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:33.686 [2024-11-26 19:07:04.827166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:33.686 [2024-11-26 19:07:04.827196] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:33.686 [2024-11-26 19:07:04.827209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:33.686 [2024-11-26 19:07:04.827223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:33.686 [2024-11-26 19:07:04.827235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:33.686 [2024-11-26 19:07:04.827247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:33.686 [2024-11-26 19:07:04.827259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:33.686 [2024-11-26 19:07:04.827281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.686 [2024-11-26 19:07:04.827297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:33.686 [2024-11-26 19:07:04.827309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:22:33.686 [2024-11-26 19:07:04.827320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.686 [2024-11-26 19:07:04.862985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.686 [2024-11-26 19:07:04.863069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:33.686 [2024-11-26 19:07:04.863093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.582 ms 00:22:33.686 [2024-11-26 19:07:04.863119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.686 [2024-11-26 19:07:04.863355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.686 [2024-11-26 19:07:04.863379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:33.686 [2024-11-26 19:07:04.863393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:33.686 [2024-11-26 19:07:04.863405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.945 [2024-11-26 19:07:04.920393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.945 [2024-11-26 19:07:04.920500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:33.945 [2024-11-26 19:07:04.920525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.952 ms 00:22:33.945 [2024-11-26 19:07:04.920538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.945 [2024-11-26 19:07:04.920744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.945 [2024-11-26 19:07:04.920765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:33.945 [2024-11-26 19:07:04.920779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:33.945 [2024-11-26 19:07:04.920791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.945 [2024-11-26 19:07:04.921163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.945 [2024-11-26 19:07:04.921211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:33.945 [2024-11-26 19:07:04.921235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:22:33.945 [2024-11-26 19:07:04.921246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.945 [2024-11-26 19:07:04.921427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.945 [2024-11-26 19:07:04.921448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:33.945 [2024-11-26 19:07:04.921460] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:22:33.945 [2024-11-26 19:07:04.921471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.945 [2024-11-26 19:07:04.939086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.945 [2024-11-26 19:07:04.939160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:33.945 [2024-11-26 19:07:04.939201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.575 ms 00:22:33.945 [2024-11-26 19:07:04.939215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.945 [2024-11-26 19:07:04.956046] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:33.945 [2024-11-26 19:07:04.956146] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:33.945 [2024-11-26 19:07:04.956188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.945 [2024-11-26 19:07:04.956205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:33.945 [2024-11-26 19:07:04.956222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.753 ms 00:22:33.945 [2024-11-26 19:07:04.956234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.945 [2024-11-26 19:07:04.987308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.945 [2024-11-26 19:07:04.987655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:33.945 [2024-11-26 19:07:04.987691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.879 ms 00:22:33.945 [2024-11-26 19:07:04.987705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.945 [2024-11-26 19:07:05.004529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.945 [2024-11-26 19:07:05.004615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:33.945 [2024-11-26 19:07:05.004636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.636 ms 00:22:33.945 [2024-11-26 19:07:05.004650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.945 [2024-11-26 19:07:05.022327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.945 [2024-11-26 19:07:05.022649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:33.945 [2024-11-26 19:07:05.022690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.473 ms 00:22:33.945 [2024-11-26 19:07:05.022713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.945 [2024-11-26 19:07:05.023676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.945 [2024-11-26 19:07:05.023756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:33.945 [2024-11-26 19:07:05.023780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:22:33.945 [2024-11-26 19:07:05.023792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.945 [2024-11-26 19:07:05.102535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.946 [2024-11-26 19:07:05.102634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:33.946 [2024-11-26 19:07:05.102657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 78.696 ms 00:22:33.946 [2024-11-26 19:07:05.102670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.946 [2024-11-26 19:07:05.116188] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:33.946 [2024-11-26 19:07:05.130934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.946 [2024-11-26 19:07:05.131037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:33.946 [2024-11-26 19:07:05.131061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.053 ms 00:22:33.946 [2024-11-26 19:07:05.131085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.946 [2024-11-26 19:07:05.131308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.946 [2024-11-26 19:07:05.131331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:33.946 [2024-11-26 19:07:05.131345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:33.946 [2024-11-26 19:07:05.131357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.946 [2024-11-26 19:07:05.131429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.946 [2024-11-26 19:07:05.131448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:33.946 [2024-11-26 19:07:05.131460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:33.946 [2024-11-26 19:07:05.131478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.946 [2024-11-26 19:07:05.131550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.946 [2024-11-26 19:07:05.131571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:33.946 [2024-11-26 19:07:05.131583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:33.946 [2024-11-26 19:07:05.131594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.946 [2024-11-26 19:07:05.131684] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:33.946 [2024-11-26 19:07:05.131711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.946 [2024-11-26 19:07:05.131723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:33.946 [2024-11-26 19:07:05.131735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:33.946 [2024-11-26 19:07:05.131747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.204 [2024-11-26 19:07:05.164946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.204 [2024-11-26 19:07:05.165042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:34.204 [2024-11-26 19:07:05.165065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.158 ms 00:22:34.204 [2024-11-26 19:07:05.165077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.204 [2024-11-26 19:07:05.165352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.204 [2024-11-26 19:07:05.165377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:34.204 [2024-11-26 19:07:05.165391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:34.204 [2024-11-26 19:07:05.165402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
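The 'FTL startup' management process that finishes just below is the sum of the trace_step entries above, dominated here by Restore P2L checkpoints (78.696 ms), Initialize NV cache (56.952 ms), and Initialize metadata (35.582 ms). A quick way to attribute the total is to tally durations per step name from the paired "name:" / "duration:" lines; a minimal sketch, assuming the trace has been saved one entry per line as ftl.log (an illustrative filename):

    # sum per-step durations from FTL management trace_step lines
    awk '/trace_step.*name:/     { sub(/.*name: /, ""); step = $0 }
         /trace_step.*duration:/ { for (i = 1; i <= NF; i++)
                                       if ($i == "duration:") total[step] += $(i + 1) }
         END { for (s in total) printf "%9.3f ms  %s\n", total[s], s }' ftl.log |
        sort -rn
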
00:22:34.204 [2024-11-26 19:07:05.166735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:34.204 [2024-11-26 19:07:05.171424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 375.882 ms, result 0 00:22:34.204 [2024-11-26 19:07:05.172375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:34.204 [2024-11-26 19:07:05.189624] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:35.138  [2024-11-26T19:07:07.290Z] Copying: 25/256 [MB] (25 MBps) [2024-11-26T19:07:08.232Z] Copying: 51/256 [MB] (26 MBps) [2024-11-26T19:07:09.605Z] Copying: 77/256 [MB] (26 MBps) [2024-11-26T19:07:10.541Z] Copying: 105/256 [MB] (27 MBps) [2024-11-26T19:07:11.476Z] Copying: 131/256 [MB] (25 MBps) [2024-11-26T19:07:12.410Z] Copying: 157/256 [MB] (26 MBps) [2024-11-26T19:07:13.345Z] Copying: 185/256 [MB] (27 MBps) [2024-11-26T19:07:14.279Z] Copying: 209/256 [MB] (24 MBps) [2024-11-26T19:07:15.281Z] Copying: 236/256 [MB] (26 MBps) [2024-11-26T19:07:15.281Z] Copying: 256/256 [MB] (average 26 MBps)[2024-11-26 19:07:14.965581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:44.066 [2024-11-26 19:07:14.978466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:14.978765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:44.066 [2024-11-26 19:07:14.978899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:44.066 [2024-11-26 19:07:14.979084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.066 [2024-11-26 19:07:14.979199] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:44.066 [2024-11-26 19:07:14.982791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:14.982996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:44.066 [2024-11-26 19:07:14.983120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.369 ms 00:22:44.066 [2024-11-26 19:07:14.983275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.066 [2024-11-26 19:07:14.984950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:14.985116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:44.066 [2024-11-26 19:07:14.985268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.607 ms 00:22:44.066 [2024-11-26 19:07:14.985396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.066 [2024-11-26 19:07:14.992508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:14.992781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:44.066 [2024-11-26 19:07:14.992921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.954 ms 00:22:44.066 [2024-11-26 19:07:14.992981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.066 [2024-11-26 19:07:15.000835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:15.000914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:44.066 
[2024-11-26 19:07:15.000934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.661 ms 00:22:44.066 [2024-11-26 19:07:15.000946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.066 [2024-11-26 19:07:15.034221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:15.034313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:44.066 [2024-11-26 19:07:15.034336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.176 ms 00:22:44.066 [2024-11-26 19:07:15.034348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.066 [2024-11-26 19:07:15.053324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:15.053423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:44.066 [2024-11-26 19:07:15.053461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.804 ms 00:22:44.066 [2024-11-26 19:07:15.053475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.066 [2024-11-26 19:07:15.053709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:15.053733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:44.066 [2024-11-26 19:07:15.053746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:22:44.066 [2024-11-26 19:07:15.053784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.066 [2024-11-26 19:07:15.087307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:15.087612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:44.066 [2024-11-26 19:07:15.087647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.491 ms 00:22:44.066 [2024-11-26 19:07:15.087661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.066 [2024-11-26 19:07:15.120825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:15.120929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:44.066 [2024-11-26 19:07:15.120951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.041 ms 00:22:44.066 [2024-11-26 19:07:15.120963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.066 [2024-11-26 19:07:15.153932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:15.154028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:44.066 [2024-11-26 19:07:15.154050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.828 ms 00:22:44.066 [2024-11-26 19:07:15.154062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.066 [2024-11-26 19:07:15.186823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.066 [2024-11-26 19:07:15.186921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:44.066 [2024-11-26 19:07:15.186944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.552 ms 00:22:44.066 [2024-11-26 19:07:15.186956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.067 [2024-11-26 19:07:15.187092] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:44.067 [2024-11-26 19:07:15.187121] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 
19:07:15.187440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:22:44.067 [2024-11-26 19:07:15.187746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.187993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.188005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.188016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.188028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.188040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.188052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.188063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.188075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:44.067 [2024-11-26 19:07:15.188086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:44.068 [2024-11-26 19:07:15.188382] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:44.068 [2024-11-26 19:07:15.188394] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: daff057d-85bc-40d2-a74e-43f65c8e8de4 00:22:44.068 [2024-11-26 19:07:15.188405] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:44.068 [2024-11-26 19:07:15.188416] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:44.068 [2024-11-26 19:07:15.188427] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:44.068 [2024-11-26 19:07:15.188438] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:44.068 [2024-11-26 19:07:15.188450] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:44.068 [2024-11-26 19:07:15.188461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:44.068 [2024-11-26 19:07:15.188479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:44.068 [2024-11-26 19:07:15.188489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:44.068 [2024-11-26 19:07:15.188499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:44.068 [2024-11-26 19:07:15.188510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.068 [2024-11-26 19:07:15.188522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:44.068 [2024-11-26 19:07:15.188535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:22:44.068 [2024-11-26 19:07:15.188546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.068 [2024-11-26 19:07:15.205793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.068 [2024-11-26 19:07:15.206102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:44.068 [2024-11-26 19:07:15.206136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.211 ms 00:22:44.068 [2024-11-26 19:07:15.206149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.068 [2024-11-26 19:07:15.206746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.068 [2024-11-26 19:07:15.206789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:44.068 [2024-11-26 19:07:15.206806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:22:44.068 [2024-11-26 19:07:15.206818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.068 [2024-11-26 19:07:15.253948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.068 [2024-11-26 19:07:15.254032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:44.068 [2024-11-26 19:07:15.254052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.068 [2024-11-26 19:07:15.254071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.068 [2024-11-26 19:07:15.254271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.068 [2024-11-26 19:07:15.254296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:44.068 [2024-11-26 19:07:15.254309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.068 [2024-11-26 19:07:15.254321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:44.068 [2024-11-26 19:07:15.254401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.068 [2024-11-26 19:07:15.254421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:44.068 [2024-11-26 19:07:15.254433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.068 [2024-11-26 19:07:15.254445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.068 [2024-11-26 19:07:15.254477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.068 [2024-11-26 19:07:15.254491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:44.068 [2024-11-26 19:07:15.254503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.068 [2024-11-26 19:07:15.254514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.326 [2024-11-26 19:07:15.359737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.326 [2024-11-26 19:07:15.359831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:44.326 [2024-11-26 19:07:15.359853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.326 [2024-11-26 19:07:15.359864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.326 [2024-11-26 19:07:15.450352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.326 [2024-11-26 19:07:15.450445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:44.326 [2024-11-26 19:07:15.450467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.326 [2024-11-26 19:07:15.450480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.326 [2024-11-26 19:07:15.450583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.326 [2024-11-26 19:07:15.450602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:44.326 [2024-11-26 19:07:15.450614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.326 [2024-11-26 19:07:15.450626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.326 [2024-11-26 19:07:15.450663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.326 [2024-11-26 19:07:15.450691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:44.326 [2024-11-26 19:07:15.450703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.326 [2024-11-26 19:07:15.450714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.326 [2024-11-26 19:07:15.450862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.326 [2024-11-26 19:07:15.450886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:44.326 [2024-11-26 19:07:15.450899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.326 [2024-11-26 19:07:15.450910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.326 [2024-11-26 19:07:15.450973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.326 [2024-11-26 19:07:15.450993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:44.327 [2024-11-26 19:07:15.451013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.327 
[2024-11-26 19:07:15.451025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.327 [2024-11-26 19:07:15.451081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.327 [2024-11-26 19:07:15.451097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:44.327 [2024-11-26 19:07:15.451109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.327 [2024-11-26 19:07:15.451120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.327 [2024-11-26 19:07:15.451203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.327 [2024-11-26 19:07:15.451231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:44.327 [2024-11-26 19:07:15.451244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.327 [2024-11-26 19:07:15.451256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.327 [2024-11-26 19:07:15.451435] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 472.987 ms, result 0 00:22:45.702 00:22:45.702 00:22:45.702 19:07:16 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78769 00:22:45.702 19:07:16 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:45.702 19:07:16 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78769 00:22:45.702 19:07:16 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78769 ']' 00:22:45.702 19:07:16 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.702 19:07:16 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.702 19:07:16 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.702 19:07:16 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.702 19:07:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:45.702 [2024-11-26 19:07:16.731638] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:22:45.702 [2024-11-26 19:07:16.731894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78769 ] 00:22:45.702 [2024-11-26 19:07:16.911618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.960 [2024-11-26 19:07:17.041429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.932 19:07:17 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.932 19:07:17 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:46.932 19:07:17 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:47.189 [2024-11-26 19:07:18.260850] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:47.189 [2024-11-26 19:07:18.260947] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:47.447 [2024-11-26 19:07:18.457503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.447 [2024-11-26 19:07:18.457610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:47.447 [2024-11-26 19:07:18.457649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:47.447 [2024-11-26 19:07:18.457672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.447 [2024-11-26 19:07:18.462397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.447 [2024-11-26 19:07:18.462453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:47.447 [2024-11-26 19:07:18.462476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.673 ms 00:22:47.447 [2024-11-26 19:07:18.462488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.448 [2024-11-26 19:07:18.462694] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:47.448 [2024-11-26 19:07:18.463775] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:47.448 [2024-11-26 19:07:18.463831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.448 [2024-11-26 19:07:18.463848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:47.448 [2024-11-26 19:07:18.463864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:22:47.448 [2024-11-26 19:07:18.463878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.448 [2024-11-26 19:07:18.465245] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:47.448 [2024-11-26 19:07:18.483537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.448 [2024-11-26 19:07:18.483795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:47.448 [2024-11-26 19:07:18.483831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.284 ms 00:22:47.448 [2024-11-26 19:07:18.483852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.448 [2024-11-26 19:07:18.484057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.448 [2024-11-26 19:07:18.484092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:47.448 [2024-11-26 19:07:18.484108] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:47.448 [2024-11-26 19:07:18.484126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.448 [2024-11-26 19:07:18.489118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.448 [2024-11-26 19:07:18.489216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:47.448 [2024-11-26 19:07:18.489238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.867 ms 00:22:47.448 [2024-11-26 19:07:18.489257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.448 [2024-11-26 19:07:18.489483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.448 [2024-11-26 19:07:18.489515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:47.448 [2024-11-26 19:07:18.489531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:22:47.448 [2024-11-26 19:07:18.489559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.448 [2024-11-26 19:07:18.489604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.448 [2024-11-26 19:07:18.489628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:47.448 [2024-11-26 19:07:18.489643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:47.448 [2024-11-26 19:07:18.489660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.448 [2024-11-26 19:07:18.489698] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:47.448 [2024-11-26 19:07:18.494030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.448 [2024-11-26 19:07:18.494080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:47.448 [2024-11-26 19:07:18.494101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.338 ms 00:22:47.448 [2024-11-26 19:07:18.494113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.448 [2024-11-26 19:07:18.494287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.448 [2024-11-26 19:07:18.494310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:47.448 [2024-11-26 19:07:18.494330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:47.448 [2024-11-26 19:07:18.494341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.448 [2024-11-26 19:07:18.494376] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:47.448 [2024-11-26 19:07:18.494404] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:47.448 [2024-11-26 19:07:18.494460] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:47.448 [2024-11-26 19:07:18.494485] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:47.448 [2024-11-26 19:07:18.494602] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:47.448 [2024-11-26 19:07:18.494628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:47.448 [2024-11-26 19:07:18.494654] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:47.448 [2024-11-26 19:07:18.494671] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:47.448 [2024-11-26 19:07:18.494697] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:47.448 [2024-11-26 19:07:18.494713] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:47.448 [2024-11-26 19:07:18.494731] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:47.448 [2024-11-26 19:07:18.494745] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:47.448 [2024-11-26 19:07:18.494766] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:47.448 [2024-11-26 19:07:18.494780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.448 [2024-11-26 19:07:18.494798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:47.448 [2024-11-26 19:07:18.494812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:22:47.448 [2024-11-26 19:07:18.494836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.448 [2024-11-26 19:07:18.494939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.448 [2024-11-26 19:07:18.494963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:47.448 [2024-11-26 19:07:18.494978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:47.448 [2024-11-26 19:07:18.494995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.448 [2024-11-26 19:07:18.495114] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:47.448 [2024-11-26 19:07:18.495139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:47.448 [2024-11-26 19:07:18.495155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:47.448 [2024-11-26 19:07:18.495186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.448 [2024-11-26 19:07:18.495203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:47.448 [2024-11-26 19:07:18.495221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:47.448 [2024-11-26 19:07:18.495233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:47.448 [2024-11-26 19:07:18.495259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:47.448 [2024-11-26 19:07:18.495270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:47.448 [2024-11-26 19:07:18.495283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.448 [2024-11-26 19:07:18.495294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:47.448 [2024-11-26 19:07:18.495307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:47.448 [2024-11-26 19:07:18.495318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.448 [2024-11-26 19:07:18.495331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:47.448 [2024-11-26 19:07:18.495342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:47.448 [2024-11-26 19:07:18.495354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.448 
[2024-11-26 19:07:18.495365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:47.448 [2024-11-26 19:07:18.495378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:47.448 [2024-11-26 19:07:18.495401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.448 [2024-11-26 19:07:18.495416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:47.448 [2024-11-26 19:07:18.495427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:47.448 [2024-11-26 19:07:18.495440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.448 [2024-11-26 19:07:18.495451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:47.448 [2024-11-26 19:07:18.495466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:47.448 [2024-11-26 19:07:18.495477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.448 [2024-11-26 19:07:18.495489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:47.448 [2024-11-26 19:07:18.495500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:47.448 [2024-11-26 19:07:18.495518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.448 [2024-11-26 19:07:18.495544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:47.448 [2024-11-26 19:07:18.495559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:47.448 [2024-11-26 19:07:18.495570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.448 [2024-11-26 19:07:18.495583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:47.448 [2024-11-26 19:07:18.495593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:47.448 [2024-11-26 19:07:18.495608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.448 [2024-11-26 19:07:18.495619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:47.448 [2024-11-26 19:07:18.495632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:47.448 [2024-11-26 19:07:18.495642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.448 [2024-11-26 19:07:18.495655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:47.448 [2024-11-26 19:07:18.495666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:47.448 [2024-11-26 19:07:18.495680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.448 [2024-11-26 19:07:18.495691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:47.448 [2024-11-26 19:07:18.495704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:47.449 [2024-11-26 19:07:18.495714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.449 [2024-11-26 19:07:18.495727] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:47.449 [2024-11-26 19:07:18.495741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:47.449 [2024-11-26 19:07:18.495754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:47.449 [2024-11-26 19:07:18.495765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.449 [2024-11-26 19:07:18.495779] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:47.449 [2024-11-26 19:07:18.495790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:47.449 [2024-11-26 19:07:18.495803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:47.449 [2024-11-26 19:07:18.495814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:47.449 [2024-11-26 19:07:18.495826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:47.449 [2024-11-26 19:07:18.495837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:47.449 [2024-11-26 19:07:18.495852] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:47.449 [2024-11-26 19:07:18.495868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.449 [2024-11-26 19:07:18.495885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:47.449 [2024-11-26 19:07:18.495897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:47.449 [2024-11-26 19:07:18.495913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:47.449 [2024-11-26 19:07:18.495925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:47.449 [2024-11-26 19:07:18.495941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:47.449 [2024-11-26 19:07:18.495953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:47.449 [2024-11-26 19:07:18.495967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:47.449 [2024-11-26 19:07:18.495978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:47.449 [2024-11-26 19:07:18.495992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:47.449 [2024-11-26 19:07:18.496004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:47.449 [2024-11-26 19:07:18.496017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:47.449 [2024-11-26 19:07:18.496029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:47.449 [2024-11-26 19:07:18.496043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:47.449 [2024-11-26 19:07:18.496056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:47.449 [2024-11-26 19:07:18.496069] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:47.449 [2024-11-26 
19:07:18.496083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.449 [2024-11-26 19:07:18.496100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:47.449 [2024-11-26 19:07:18.496112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:47.449 [2024-11-26 19:07:18.496125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:47.449 [2024-11-26 19:07:18.496137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:47.449 [2024-11-26 19:07:18.496153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.449 [2024-11-26 19:07:18.496165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:47.449 [2024-11-26 19:07:18.496195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:22:47.449 [2024-11-26 19:07:18.496210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.449 [2024-11-26 19:07:18.531853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.449 [2024-11-26 19:07:18.532147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:47.449 [2024-11-26 19:07:18.532320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.552 ms 00:22:47.449 [2024-11-26 19:07:18.532488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.449 [2024-11-26 19:07:18.532749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.449 [2024-11-26 19:07:18.532901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:47.449 [2024-11-26 19:07:18.533051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:47.449 [2024-11-26 19:07:18.533108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.449 [2024-11-26 19:07:18.576281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.449 [2024-11-26 19:07:18.576590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:47.449 [2024-11-26 19:07:18.576730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.042 ms 00:22:47.449 [2024-11-26 19:07:18.576858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.449 [2024-11-26 19:07:18.577072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.449 [2024-11-26 19:07:18.577265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:47.449 [2024-11-26 19:07:18.577440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:47.449 [2024-11-26 19:07:18.577589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.449 [2024-11-26 19:07:18.578009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.449 [2024-11-26 19:07:18.578150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:47.449 [2024-11-26 19:07:18.578293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:22:47.449 [2024-11-26 19:07:18.578349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:47.449 [2024-11-26 19:07:18.578579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.449 [2024-11-26 19:07:18.578647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:47.449 [2024-11-26 19:07:18.578781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:22:47.449 [2024-11-26 19:07:18.578904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.449 [2024-11-26 19:07:18.598627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.449 [2024-11-26 19:07:18.598925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:47.449 [2024-11-26 19:07:18.599064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.632 ms 00:22:47.449 [2024-11-26 19:07:18.599237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.449 [2024-11-26 19:07:18.629126] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:47.449 [2024-11-26 19:07:18.629467] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:47.449 [2024-11-26 19:07:18.629644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.449 [2024-11-26 19:07:18.629699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:47.449 [2024-11-26 19:07:18.629829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.028 ms 00:22:47.449 [2024-11-26 19:07:18.629949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.660904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.661232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:47.708 [2024-11-26 19:07:18.661281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.617 ms 00:22:47.708 [2024-11-26 19:07:18.661298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.678108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.678212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:47.708 [2024-11-26 19:07:18.678251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.615 ms 00:22:47.708 [2024-11-26 19:07:18.678266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.694627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.694714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:47.708 [2024-11-26 19:07:18.694744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.124 ms 00:22:47.708 [2024-11-26 19:07:18.694758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.695793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.695839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:47.708 [2024-11-26 19:07:18.695865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:22:47.708 [2024-11-26 19:07:18.695879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 
19:07:18.773389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.773477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:47.708 [2024-11-26 19:07:18.773504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.446 ms 00:22:47.708 [2024-11-26 19:07:18.773517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.786871] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:47.708 [2024-11-26 19:07:18.801624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.801728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:47.708 [2024-11-26 19:07:18.801751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.909 ms 00:22:47.708 [2024-11-26 19:07:18.801766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.801941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.801966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:47.708 [2024-11-26 19:07:18.801980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:47.708 [2024-11-26 19:07:18.801994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.802061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.802097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:47.708 [2024-11-26 19:07:18.802111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:47.708 [2024-11-26 19:07:18.802137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.802202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.802233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:47.708 [2024-11-26 19:07:18.802248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:47.708 [2024-11-26 19:07:18.802266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.802319] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:47.708 [2024-11-26 19:07:18.802351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.802371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:47.708 [2024-11-26 19:07:18.802390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:47.708 [2024-11-26 19:07:18.802408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.834914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.835003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:47.708 [2024-11-26 19:07:18.835030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.442 ms 00:22:47.708 [2024-11-26 19:07:18.835043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.835307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.708 [2024-11-26 19:07:18.835331] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:47.708 [2024-11-26 19:07:18.835351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:47.708 [2024-11-26 19:07:18.835363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.708 [2024-11-26 19:07:18.836584] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:47.708 [2024-11-26 19:07:18.841320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 378.623 ms, result 0 00:22:47.708 [2024-11-26 19:07:18.842378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:47.708 Some configs were skipped because the RPC state that can call them passed over. 00:22:47.708 19:07:18 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:48.273 [2024-11-26 19:07:19.198756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.273 [2024-11-26 19:07:19.199074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:48.273 [2024-11-26 19:07:19.199243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:22:48.273 [2024-11-26 19:07:19.199377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.273 [2024-11-26 19:07:19.199585] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.238 ms, result 0 00:22:48.273 true 00:22:48.273 19:07:19 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:48.531 [2024-11-26 19:07:19.490695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.531 [2024-11-26 19:07:19.490945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:48.531 [2024-11-26 19:07:19.490991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:22:48.531 [2024-11-26 19:07:19.491007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.531 [2024-11-26 19:07:19.491084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.352 ms, result 0 00:22:48.531 true 00:22:48.531 19:07:19 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78769 00:22:48.531 19:07:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78769 ']' 00:22:48.531 19:07:19 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78769 00:22:48.531 19:07:19 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:48.531 19:07:19 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.531 19:07:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78769 00:22:48.531 killing process with pid 78769 00:22:48.531 19:07:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:48.531 19:07:19 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:48.531 19:07:19 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78769' 00:22:48.531 19:07:19 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78769 00:22:48.531 19:07:19 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78769 00:22:49.466 [2024-11-26 19:07:20.550693] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-26 19:07:20.550779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:49.466 [2024-11-26 19:07:20.550802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:49.466 [2024-11-26 19:07:20.550817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.466 [2024-11-26 19:07:20.550853] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:49.466 [2024-11-26 19:07:20.554223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-26 19:07:20.554267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:49.466 [2024-11-26 19:07:20.554292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.339 ms 00:22:49.466 [2024-11-26 19:07:20.554305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.466 [2024-11-26 19:07:20.554649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-26 19:07:20.554677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:49.466 [2024-11-26 19:07:20.554696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:22:49.466 [2024-11-26 19:07:20.554707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.466 [2024-11-26 19:07:20.559008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-26 19:07:20.559075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:49.466 [2024-11-26 19:07:20.559097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.264 ms 00:22:49.466 [2024-11-26 19:07:20.559110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.466 [2024-11-26 19:07:20.566738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-26 19:07:20.566803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:49.466 [2024-11-26 19:07:20.566823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.564 ms 00:22:49.466 [2024-11-26 19:07:20.566836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.466 [2024-11-26 19:07:20.579774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-26 19:07:20.579873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:49.466 [2024-11-26 19:07:20.579903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.815 ms 00:22:49.466 [2024-11-26 19:07:20.579915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.466 [2024-11-26 19:07:20.588563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-26 19:07:20.588651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:49.466 [2024-11-26 19:07:20.588674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.520 ms 00:22:49.466 [2024-11-26 19:07:20.588687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.466 [2024-11-26 19:07:20.588869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-26 19:07:20.588890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:49.466 [2024-11-26 19:07:20.588907] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:22:49.466 [2024-11-26 19:07:20.588923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.466 [2024-11-26 19:07:20.602294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-26 19:07:20.602379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:49.466 [2024-11-26 19:07:20.602404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.314 ms 00:22:49.466 [2024-11-26 19:07:20.602417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.466 [2024-11-26 19:07:20.615645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-26 19:07:20.615737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:49.466 [2024-11-26 19:07:20.615767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.109 ms 00:22:49.466 [2024-11-26 19:07:20.615779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.466 [2024-11-26 19:07:20.628536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-26 19:07:20.628620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:49.466 [2024-11-26 19:07:20.628644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.622 ms 00:22:49.467 [2024-11-26 19:07:20.628656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.467 [2024-11-26 19:07:20.641587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.467 [2024-11-26 19:07:20.641673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:49.467 [2024-11-26 19:07:20.641698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.776 ms 00:22:49.467 [2024-11-26 19:07:20.641710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.467 [2024-11-26 19:07:20.641816] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:49.467 [2024-11-26 19:07:20.641845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.641867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.641879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.641894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.641906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.641925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.641938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.641952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.641965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.641979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 
19:07:20.641991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:49.467 [2024-11-26 19:07:20.642366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.642988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.643006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.643019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.643036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.643049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.643066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:49.467 [2024-11-26 19:07:20.643079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:49.468 [2024-11-26 19:07:20.643409] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:49.468 [2024-11-26 19:07:20.643438] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: daff057d-85bc-40d2-a74e-43f65c8e8de4 00:22:49.468 [2024-11-26 19:07:20.643461] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:49.468 [2024-11-26 19:07:20.643478] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:49.468 [2024-11-26 19:07:20.643491] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:49.468 [2024-11-26 19:07:20.643508] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:49.468 [2024-11-26 19:07:20.643521] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:49.468 [2024-11-26 19:07:20.643545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:49.468 [2024-11-26 19:07:20.643558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:49.468 [2024-11-26 19:07:20.643570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:49.468 [2024-11-26 19:07:20.643580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:49.468 [2024-11-26 19:07:20.643595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:49.468 [2024-11-26 19:07:20.643607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:49.468 [2024-11-26 19:07:20.643622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms 00:22:49.468 [2024-11-26 19:07:20.643636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.468 [2024-11-26 19:07:20.660716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.468 [2024-11-26 19:07:20.660997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:49.468 [2024-11-26 19:07:20.661069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.003 ms 00:22:49.468 [2024-11-26 19:07:20.661094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.468 [2024-11-26 19:07:20.661824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.468 [2024-11-26 19:07:20.661871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:49.468 [2024-11-26 19:07:20.661909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms 00:22:49.468 [2024-11-26 19:07:20.661931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.726 [2024-11-26 19:07:20.722310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.726 [2024-11-26 19:07:20.722385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:49.726 [2024-11-26 19:07:20.722429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.726 [2024-11-26 19:07:20.722451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.726 [2024-11-26 19:07:20.722679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.726 [2024-11-26 19:07:20.722711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:49.726 [2024-11-26 19:07:20.722761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.726 [2024-11-26 19:07:20.722783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.726 [2024-11-26 19:07:20.722906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.726 [2024-11-26 19:07:20.722937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:49.726 [2024-11-26 19:07:20.722979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.726 [2024-11-26 19:07:20.723002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.726 [2024-11-26 19:07:20.723059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.726 [2024-11-26 19:07:20.723084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:49.726 [2024-11-26 19:07:20.723115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.726 [2024-11-26 19:07:20.723147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.726 [2024-11-26 19:07:20.832312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.727 [2024-11-26 19:07:20.832423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:49.727 [2024-11-26 19:07:20.832479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.727 [2024-11-26 19:07:20.832503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.727 [2024-11-26 
19:07:20.922327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.727 [2024-11-26 19:07:20.922569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:49.727 [2024-11-26 19:07:20.922640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.727 [2024-11-26 19:07:20.922664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.727 [2024-11-26 19:07:20.922841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.727 [2024-11-26 19:07:20.922874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:49.727 [2024-11-26 19:07:20.922915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.727 [2024-11-26 19:07:20.922940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.727 [2024-11-26 19:07:20.923011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.727 [2024-11-26 19:07:20.923038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:49.727 [2024-11-26 19:07:20.923069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.727 [2024-11-26 19:07:20.923092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.727 [2024-11-26 19:07:20.923339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.727 [2024-11-26 19:07:20.923372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:49.727 [2024-11-26 19:07:20.923405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.727 [2024-11-26 19:07:20.923429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.727 [2024-11-26 19:07:20.923543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.727 [2024-11-26 19:07:20.923585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:49.727 [2024-11-26 19:07:20.923621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.727 [2024-11-26 19:07:20.923643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.727 [2024-11-26 19:07:20.923740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.727 [2024-11-26 19:07:20.923768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:49.727 [2024-11-26 19:07:20.923807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.727 [2024-11-26 19:07:20.923831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.727 [2024-11-26 19:07:20.923931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.727 [2024-11-26 19:07:20.923961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:49.727 [2024-11-26 19:07:20.923993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.727 [2024-11-26 19:07:20.924020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.727 [2024-11-26 19:07:20.924329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.573 ms, result 0 00:22:51.102 19:07:21 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:51.102 19:07:21 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:51.102 [2024-11-26 19:07:21.964444] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:22:51.102 [2024-11-26 19:07:21.964629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78833 ] 00:22:51.102 [2024-11-26 19:07:22.164061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.102 [2024-11-26 19:07:22.277716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.671 [2024-11-26 19:07:22.611513] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:51.671 [2024-11-26 19:07:22.611900] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:51.671 [2024-11-26 19:07:22.777890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.671 [2024-11-26 19:07:22.777977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:51.671 [2024-11-26 19:07:22.778000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:51.671 [2024-11-26 19:07:22.778012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.671 [2024-11-26 19:07:22.782275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.671 [2024-11-26 19:07:22.782532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:51.671 [2024-11-26 19:07:22.782573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.230 ms 00:22:51.671 [2024-11-26 19:07:22.782592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.671 [2024-11-26 19:07:22.782901] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:51.671 [2024-11-26 19:07:22.784347] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:51.671 [2024-11-26 19:07:22.784407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.671 [2024-11-26 19:07:22.784429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:51.671 [2024-11-26 19:07:22.784449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.521 ms 00:22:51.671 [2024-11-26 19:07:22.784471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.671 [2024-11-26 19:07:22.786011] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:51.671 [2024-11-26 19:07:22.807427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.671 [2024-11-26 19:07:22.807568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:51.671 [2024-11-26 19:07:22.807603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.409 ms 00:22:51.671 [2024-11-26 19:07:22.807622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.671 [2024-11-26 19:07:22.807915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.671 [2024-11-26 19:07:22.807956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:51.671 [2024-11-26 19:07:22.807983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.060 ms 00:22:51.671 [2024-11-26 19:07:22.808005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.671 [2024-11-26 19:07:22.813474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.671 [2024-11-26 19:07:22.813857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:51.671 [2024-11-26 19:07:22.813898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.372 ms 00:22:51.671 [2024-11-26 19:07:22.813916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.671 [2024-11-26 19:07:22.814155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.671 [2024-11-26 19:07:22.814216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:51.671 [2024-11-26 19:07:22.814238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:51.671 [2024-11-26 19:07:22.814263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.671 [2024-11-26 19:07:22.814325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.671 [2024-11-26 19:07:22.814348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:51.671 [2024-11-26 19:07:22.814366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:51.671 [2024-11-26 19:07:22.814382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.671 [2024-11-26 19:07:22.814422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:51.671 [2024-11-26 19:07:22.819025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.671 [2024-11-26 19:07:22.819083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:51.671 [2024-11-26 19:07:22.819102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms 00:22:51.671 [2024-11-26 19:07:22.819113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.671 [2024-11-26 19:07:22.819271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.671 [2024-11-26 19:07:22.819302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:51.672 [2024-11-26 19:07:22.819317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:51.672 [2024-11-26 19:07:22.819335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.672 [2024-11-26 19:07:22.819371] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:51.672 [2024-11-26 19:07:22.819401] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:51.672 [2024-11-26 19:07:22.819446] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:51.672 [2024-11-26 19:07:22.819467] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:51.672 [2024-11-26 19:07:22.819611] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:51.672 [2024-11-26 19:07:22.819642] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:51.672 [2024-11-26 19:07:22.819665] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:51.672 [2024-11-26 19:07:22.819686] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:51.672 [2024-11-26 19:07:22.819700] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:51.672 [2024-11-26 19:07:22.819712] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:51.672 [2024-11-26 19:07:22.819723] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:51.672 [2024-11-26 19:07:22.819734] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:51.672 [2024-11-26 19:07:22.819752] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:51.672 [2024-11-26 19:07:22.819773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.672 [2024-11-26 19:07:22.819794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:51.672 [2024-11-26 19:07:22.819811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:22:51.672 [2024-11-26 19:07:22.819822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.672 [2024-11-26 19:07:22.819934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.672 [2024-11-26 19:07:22.819966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:51.672 [2024-11-26 19:07:22.819984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:51.672 [2024-11-26 19:07:22.820003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.672 [2024-11-26 19:07:22.820142] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:51.672 [2024-11-26 19:07:22.820163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:51.672 [2024-11-26 19:07:22.820192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:51.672 [2024-11-26 19:07:22.820205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:51.672 [2024-11-26 19:07:22.820240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:51.672 [2024-11-26 19:07:22.820281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:51.672 [2024-11-26 19:07:22.820301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:51.672 [2024-11-26 19:07:22.820339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:51.672 [2024-11-26 19:07:22.820377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:51.672 [2024-11-26 19:07:22.820399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:51.672 [2024-11-26 19:07:22.820419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:51.672 [2024-11-26 19:07:22.820438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:51.672 [2024-11-26 19:07:22.820456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820470] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:51.672 [2024-11-26 19:07:22.820481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:51.672 [2024-11-26 19:07:22.820490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:51.672 [2024-11-26 19:07:22.820511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.672 [2024-11-26 19:07:22.820532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:51.672 [2024-11-26 19:07:22.820542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.672 [2024-11-26 19:07:22.820563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:51.672 [2024-11-26 19:07:22.820582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.672 [2024-11-26 19:07:22.820618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:51.672 [2024-11-26 19:07:22.820636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:51.672 [2024-11-26 19:07:22.820664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:51.672 [2024-11-26 19:07:22.820675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:51.672 [2024-11-26 19:07:22.820697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:51.672 [2024-11-26 19:07:22.820708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:51.672 [2024-11-26 19:07:22.820726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:51.672 [2024-11-26 19:07:22.820744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:51.672 [2024-11-26 19:07:22.820762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:51.672 [2024-11-26 19:07:22.820778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:51.672 [2024-11-26 19:07:22.820808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:51.672 [2024-11-26 19:07:22.820827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820845] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:51.672 [2024-11-26 19:07:22.820872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:51.672 [2024-11-26 19:07:22.820889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:51.672 [2024-11-26 19:07:22.820906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:51.672 [2024-11-26 19:07:22.820923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:51.672 
[2024-11-26 19:07:22.820939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:51.672 [2024-11-26 19:07:22.820949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:51.672 [2024-11-26 19:07:22.820959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:51.672 [2024-11-26 19:07:22.820969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:51.672 [2024-11-26 19:07:22.820980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:51.672 [2024-11-26 19:07:22.820992] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:51.672 [2024-11-26 19:07:22.821007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:51.672 [2024-11-26 19:07:22.821019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:51.672 [2024-11-26 19:07:22.821032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:51.672 [2024-11-26 19:07:22.821044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:51.672 [2024-11-26 19:07:22.821063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:51.672 [2024-11-26 19:07:22.821083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:51.672 [2024-11-26 19:07:22.821103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:51.672 [2024-11-26 19:07:22.821116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:51.672 [2024-11-26 19:07:22.821128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:51.672 [2024-11-26 19:07:22.821139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:51.672 [2024-11-26 19:07:22.821151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:51.672 [2024-11-26 19:07:22.821162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:51.672 [2024-11-26 19:07:22.821553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:51.672 [2024-11-26 19:07:22.821718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:51.672 [2024-11-26 19:07:22.821888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:51.672 [2024-11-26 19:07:22.822053] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:51.672 [2024-11-26 19:07:22.822233] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:51.672 [2024-11-26 19:07:22.822460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:51.673 [2024-11-26 19:07:22.822691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:51.673 [2024-11-26 19:07:22.822844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:51.673 [2024-11-26 19:07:22.822869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:51.673 [2024-11-26 19:07:22.822894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.673 [2024-11-26 19:07:22.822915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:51.673 [2024-11-26 19:07:22.822934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.822 ms 00:22:51.673 [2024-11-26 19:07:22.822954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.673 [2024-11-26 19:07:22.857949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.673 [2024-11-26 19:07:22.858261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:51.673 [2024-11-26 19:07:22.858297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.907 ms 00:22:51.673 [2024-11-26 19:07:22.858320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.673 [2024-11-26 19:07:22.858527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.673 [2024-11-26 19:07:22.858548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:51.673 [2024-11-26 19:07:22.858562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:51.673 [2024-11-26 19:07:22.858573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:22.915810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:22.915888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:51.932 [2024-11-26 19:07:22.915910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.203 ms 00:22:51.932 [2024-11-26 19:07:22.915923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:22.916104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:22.916126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:51.932 [2024-11-26 19:07:22.916140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:51.932 [2024-11-26 19:07:22.916151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:22.916527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:22.916558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:51.932 [2024-11-26 19:07:22.916588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:22:51.932 [2024-11-26 19:07:22.916601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 
19:07:22.916767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:22.916793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:51.932 [2024-11-26 19:07:22.916806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:22:51.932 [2024-11-26 19:07:22.916817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:22.934108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:22.934465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:51.932 [2024-11-26 19:07:22.934501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.248 ms 00:22:51.932 [2024-11-26 19:07:22.934515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:22.952045] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:51.932 [2024-11-26 19:07:22.952134] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:51.932 [2024-11-26 19:07:22.952158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:22.952190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:51.932 [2024-11-26 19:07:22.952209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.440 ms 00:22:51.932 [2024-11-26 19:07:22.952221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:22.983160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:22.983270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:51.932 [2024-11-26 19:07:22.983293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.739 ms 00:22:51.932 [2024-11-26 19:07:22.983330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:23.000097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:23.000193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:51.932 [2024-11-26 19:07:23.000217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.544 ms 00:22:51.932 [2024-11-26 19:07:23.000229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:23.016837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:23.016952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:51.932 [2024-11-26 19:07:23.016974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.421 ms 00:22:51.932 [2024-11-26 19:07:23.016986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:23.017942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:23.018117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:51.932 [2024-11-26 19:07:23.018146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:22:51.932 [2024-11-26 19:07:23.018158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:23.093590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:23.093681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:51.932 [2024-11-26 19:07:23.093704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.361 ms 00:22:51.932 [2024-11-26 19:07:23.093716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:23.107377] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:51.932 [2024-11-26 19:07:23.121945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:23.122032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:51.932 [2024-11-26 19:07:23.122064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.031 ms 00:22:51.932 [2024-11-26 19:07:23.122077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:23.122275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:23.122298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:51.932 [2024-11-26 19:07:23.122311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:51.932 [2024-11-26 19:07:23.122322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:23.122393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:23.122410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:51.932 [2024-11-26 19:07:23.122427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:51.932 [2024-11-26 19:07:23.122442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:23.122487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:23.122505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:51.932 [2024-11-26 19:07:23.122517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:51.932 [2024-11-26 19:07:23.122528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.932 [2024-11-26 19:07:23.122575] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:51.932 [2024-11-26 19:07:23.122592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.932 [2024-11-26 19:07:23.122603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:51.932 [2024-11-26 19:07:23.122614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:51.933 [2024-11-26 19:07:23.122626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.191 [2024-11-26 19:07:23.155041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.191 [2024-11-26 19:07:23.155137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:52.191 [2024-11-26 19:07:23.155160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.376 ms 00:22:52.192 [2024-11-26 19:07:23.155192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.192 [2024-11-26 19:07:23.155436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.192 [2024-11-26 19:07:23.155458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:52.192 [2024-11-26 19:07:23.155472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:52.192 [2024-11-26 19:07:23.155489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.192 [2024-11-26 19:07:23.156672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:52.192 [2024-11-26 19:07:23.161354] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 378.456 ms, result 0 00:22:52.192 [2024-11-26 19:07:23.162200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:52.192 [2024-11-26 19:07:23.179151] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:53.126  [2024-11-26T19:07:25.275Z] Copying: 27/256 [MB] (27 MBps) [2024-11-26T19:07:26.210Z] Copying: 54/256 [MB] (26 MBps) [2024-11-26T19:07:27.585Z] Copying: 77/256 [MB] (23 MBps) [2024-11-26T19:07:28.520Z] Copying: 101/256 [MB] (23 MBps) [2024-11-26T19:07:29.456Z] Copying: 125/256 [MB] (24 MBps) [2024-11-26T19:07:30.396Z] Copying: 147/256 [MB] (22 MBps) [2024-11-26T19:07:31.330Z] Copying: 171/256 [MB] (23 MBps) [2024-11-26T19:07:32.265Z] Copying: 196/256 [MB] (24 MBps) [2024-11-26T19:07:33.201Z] Copying: 221/256 [MB] (24 MBps) [2024-11-26T19:07:33.769Z] Copying: 247/256 [MB] (26 MBps) [2024-11-26T19:07:33.769Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-26 19:07:33.509055] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:02.554 [2024-11-26 19:07:33.521908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.554 [2024-11-26 19:07:33.522007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:02.554 [2024-11-26 19:07:33.522050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:02.555 [2024-11-26 19:07:33.522063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.522102] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:02.555 [2024-11-26 19:07:33.525547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.555 [2024-11-26 19:07:33.525603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:02.555 [2024-11-26 19:07:33.525630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.418 ms 00:23:02.555 [2024-11-26 19:07:33.525650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.526040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.555 [2024-11-26 19:07:33.526071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:02.555 [2024-11-26 19:07:33.526085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:23:02.555 [2024-11-26 19:07:33.526096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.529956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.555 [2024-11-26 19:07:33.530011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:02.555 [2024-11-26 19:07:33.530028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.820 ms 00:23:02.555 [2024-11-26 19:07:33.530039] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.537792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.555 [2024-11-26 19:07:33.537864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:02.555 [2024-11-26 19:07:33.537883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.708 ms 00:23:02.555 [2024-11-26 19:07:33.537894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.577879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.555 [2024-11-26 19:07:33.577991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:02.555 [2024-11-26 19:07:33.578024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.817 ms 00:23:02.555 [2024-11-26 19:07:33.578044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.599101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.555 [2024-11-26 19:07:33.599218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:02.555 [2024-11-26 19:07:33.599249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.927 ms 00:23:02.555 [2024-11-26 19:07:33.599263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.599489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.555 [2024-11-26 19:07:33.599517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:02.555 [2024-11-26 19:07:33.599572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:23:02.555 [2024-11-26 19:07:33.599585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.632619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.555 [2024-11-26 19:07:33.632694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:02.555 [2024-11-26 19:07:33.632715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.002 ms 00:23:02.555 [2024-11-26 19:07:33.632727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.665158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.555 [2024-11-26 19:07:33.665260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:02.555 [2024-11-26 19:07:33.665282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.256 ms 00:23:02.555 [2024-11-26 19:07:33.665294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.697664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.555 [2024-11-26 19:07:33.697755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:02.555 [2024-11-26 19:07:33.697775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.272 ms 00:23:02.555 [2024-11-26 19:07:33.697788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.729953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.555 [2024-11-26 19:07:33.730056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:02.555 [2024-11-26 19:07:33.730077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 32.018 ms 00:23:02.555 [2024-11-26 19:07:33.730089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.555 [2024-11-26 19:07:33.730202] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:02.555 [2024-11-26 19:07:33.730231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 
[2024-11-26 19:07:33.730512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:23:02.555 [2024-11-26 19:07:33.730895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:02.555 [2024-11-26 19:07:33.730918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.730929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.730941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.730952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.730963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.730981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:02.556 [2024-11-26 19:07:33.731731] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:02.556 [2024-11-26 19:07:33.731742] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: daff057d-85bc-40d2-a74e-43f65c8e8de4 00:23:02.556 [2024-11-26 19:07:33.731754] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:02.556 [2024-11-26 19:07:33.731765] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:02.556 [2024-11-26 19:07:33.731783] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:02.556 [2024-11-26 19:07:33.731802] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:02.556 [2024-11-26 19:07:33.731822] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:02.556 [2024-11-26 19:07:33.731861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:02.556 [2024-11-26 19:07:33.731873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:02.556 [2024-11-26 19:07:33.731883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:02.556 [2024-11-26 19:07:33.731893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:02.556 [2024-11-26 19:07:33.731905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.556 [2024-11-26 19:07:33.731916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:02.556 [2024-11-26 19:07:33.731929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:23:02.556 [2024-11-26 19:07:33.731940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.556 [2024-11-26 19:07:33.748963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.556 [2024-11-26 19:07:33.749024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:02.556 [2024-11-26 19:07:33.749045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.979 ms 00:23:02.556 [2024-11-26 19:07:33.749079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.556 [2024-11-26 19:07:33.749667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.556 [2024-11-26 19:07:33.749857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:02.556 [2024-11-26 19:07:33.749887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:23:02.556 [2024-11-26 19:07:33.749900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.797313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 19:07:33.797393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.815 [2024-11-26 19:07:33.797430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.797442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.797571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 
19:07:33.797590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.815 [2024-11-26 19:07:33.797603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.797614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.797693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 19:07:33.797712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.815 [2024-11-26 19:07:33.797724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.797748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.797774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 19:07:33.797789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.815 [2024-11-26 19:07:33.797800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.797811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.902630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 19:07:33.902712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.815 [2024-11-26 19:07:33.902732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.902772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.990085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 19:07:33.990166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.815 [2024-11-26 19:07:33.990237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.990259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.990358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 19:07:33.990377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.815 [2024-11-26 19:07:33.990389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.990400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.990457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 19:07:33.990473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.815 [2024-11-26 19:07:33.990485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.990495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.990665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 19:07:33.990689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.815 [2024-11-26 19:07:33.990702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.990713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.990774] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 19:07:33.990819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:02.815 [2024-11-26 19:07:33.990838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.990848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.990897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 19:07:33.990912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.815 [2024-11-26 19:07:33.990923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.990934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.991016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.815 [2024-11-26 19:07:33.991036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.815 [2024-11-26 19:07:33.991049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.815 [2024-11-26 19:07:33.991060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.815 [2024-11-26 19:07:33.991294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 469.389 ms, result 0 00:23:03.750 00:23:03.750 00:23:03.750 19:07:34 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:23:03.750 19:07:34 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:04.317 19:07:35 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:04.576 [2024-11-26 19:07:35.642215] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
00:23:04.576 [2024-11-26 19:07:35.642422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78971 ] 00:23:04.833 [2024-11-26 19:07:35.814907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.833 [2024-11-26 19:07:35.918255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.092 [2024-11-26 19:07:36.249663] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:05.092 [2024-11-26 19:07:36.249760] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:05.352 [2024-11-26 19:07:36.412678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.352 [2024-11-26 19:07:36.412997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:05.352 [2024-11-26 19:07:36.413040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:05.352 [2024-11-26 19:07:36.413056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.352 [2024-11-26 19:07:36.416602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.352 [2024-11-26 19:07:36.416656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:05.352 [2024-11-26 19:07:36.416677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.504 ms 00:23:05.352 [2024-11-26 19:07:36.416689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.353 [2024-11-26 19:07:36.416877] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:05.353 [2024-11-26 19:07:36.417861] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:05.353 [2024-11-26 19:07:36.417906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.353 [2024-11-26 19:07:36.417921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:05.353 [2024-11-26 19:07:36.417934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:23:05.353 [2024-11-26 19:07:36.417945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.353 [2024-11-26 19:07:36.419281] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:05.353 [2024-11-26 19:07:36.436371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.353 [2024-11-26 19:07:36.436478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:05.353 [2024-11-26 19:07:36.436502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.084 ms 00:23:05.353 [2024-11-26 19:07:36.436514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.353 [2024-11-26 19:07:36.436735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.353 [2024-11-26 19:07:36.436759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:05.353 [2024-11-26 19:07:36.436773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:05.353 [2024-11-26 19:07:36.436784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.353 [2024-11-26 19:07:36.441546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:05.353 [2024-11-26 19:07:36.441615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:05.353 [2024-11-26 19:07:36.441634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms 00:23:05.353 [2024-11-26 19:07:36.441646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.353 [2024-11-26 19:07:36.441816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.353 [2024-11-26 19:07:36.441838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:05.353 [2024-11-26 19:07:36.441852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:05.353 [2024-11-26 19:07:36.441864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.353 [2024-11-26 19:07:36.441908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.353 [2024-11-26 19:07:36.441923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:05.353 [2024-11-26 19:07:36.441935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:05.353 [2024-11-26 19:07:36.441946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.353 [2024-11-26 19:07:36.441978] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:05.353 [2024-11-26 19:07:36.446363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.353 [2024-11-26 19:07:36.446414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:05.353 [2024-11-26 19:07:36.446432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.393 ms 00:23:05.353 [2024-11-26 19:07:36.446443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.353 [2024-11-26 19:07:36.446535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.353 [2024-11-26 19:07:36.446553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:05.353 [2024-11-26 19:07:36.446566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:05.353 [2024-11-26 19:07:36.446578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.353 [2024-11-26 19:07:36.446644] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:05.353 [2024-11-26 19:07:36.446675] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:05.353 [2024-11-26 19:07:36.446720] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:05.353 [2024-11-26 19:07:36.446740] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:05.353 [2024-11-26 19:07:36.446854] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:05.353 [2024-11-26 19:07:36.446869] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:05.353 [2024-11-26 19:07:36.446883] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:05.353 [2024-11-26 19:07:36.446903] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:05.353 [2024-11-26 19:07:36.446916] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:05.353 [2024-11-26 19:07:36.446929] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:05.353 [2024-11-26 19:07:36.446940] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:05.353 [2024-11-26 19:07:36.446951] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:05.353 [2024-11-26 19:07:36.446962] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:05.353 [2024-11-26 19:07:36.446974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.353 [2024-11-26 19:07:36.446985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:05.353 [2024-11-26 19:07:36.446997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:23:05.353 [2024-11-26 19:07:36.447008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.353 [2024-11-26 19:07:36.447111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.353 [2024-11-26 19:07:36.447137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:05.353 [2024-11-26 19:07:36.447150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:05.353 [2024-11-26 19:07:36.447161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.353 [2024-11-26 19:07:36.447306] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:05.353 [2024-11-26 19:07:36.447326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:05.353 [2024-11-26 19:07:36.447338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:05.353 [2024-11-26 19:07:36.447350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.353 [2024-11-26 19:07:36.447361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:05.353 [2024-11-26 19:07:36.447371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:05.353 [2024-11-26 19:07:36.447381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:05.353 [2024-11-26 19:07:36.447392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:05.353 [2024-11-26 19:07:36.447402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:05.353 [2024-11-26 19:07:36.447412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:05.353 [2024-11-26 19:07:36.447422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:05.353 [2024-11-26 19:07:36.447447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:05.353 [2024-11-26 19:07:36.447458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:05.353 [2024-11-26 19:07:36.447468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:05.353 [2024-11-26 19:07:36.447478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:05.353 [2024-11-26 19:07:36.447488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.353 [2024-11-26 19:07:36.447498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:05.353 [2024-11-26 19:07:36.447508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:05.353 [2024-11-26 19:07:36.447518] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.353 [2024-11-26 19:07:36.447529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:05.353 [2024-11-26 19:07:36.447553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:05.353 [2024-11-26 19:07:36.447575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.353 [2024-11-26 19:07:36.447592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:05.354 [2024-11-26 19:07:36.447608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:05.354 [2024-11-26 19:07:36.447622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.354 [2024-11-26 19:07:36.447633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:05.354 [2024-11-26 19:07:36.447643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:05.354 [2024-11-26 19:07:36.447653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.354 [2024-11-26 19:07:36.447663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:05.354 [2024-11-26 19:07:36.447673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:05.354 [2024-11-26 19:07:36.447683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:05.354 [2024-11-26 19:07:36.447703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:05.354 [2024-11-26 19:07:36.447713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:05.354 [2024-11-26 19:07:36.447723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:05.354 [2024-11-26 19:07:36.447733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:05.354 [2024-11-26 19:07:36.447747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:05.354 [2024-11-26 19:07:36.447765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:05.354 [2024-11-26 19:07:36.447783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:05.354 [2024-11-26 19:07:36.447814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:05.354 [2024-11-26 19:07:36.447827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.354 [2024-11-26 19:07:36.447837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:05.354 [2024-11-26 19:07:36.447849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:05.354 [2024-11-26 19:07:36.447858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.354 [2024-11-26 19:07:36.447868] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:05.354 [2024-11-26 19:07:36.447879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:05.354 [2024-11-26 19:07:36.447897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:05.354 [2024-11-26 19:07:36.447907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:05.354 [2024-11-26 19:07:36.447919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:05.354 [2024-11-26 19:07:36.447929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:05.354 [2024-11-26 19:07:36.447939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:05.354 
[2024-11-26 19:07:36.447950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:05.354 [2024-11-26 19:07:36.447959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:05.354 [2024-11-26 19:07:36.447970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:05.354 [2024-11-26 19:07:36.447983] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:05.354 [2024-11-26 19:07:36.447997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:05.354 [2024-11-26 19:07:36.448009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:05.354 [2024-11-26 19:07:36.448020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:05.354 [2024-11-26 19:07:36.448031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:05.354 [2024-11-26 19:07:36.448042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:05.354 [2024-11-26 19:07:36.448053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:05.354 [2024-11-26 19:07:36.448064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:05.354 [2024-11-26 19:07:36.448075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:05.354 [2024-11-26 19:07:36.448086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:05.354 [2024-11-26 19:07:36.448098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:05.354 [2024-11-26 19:07:36.448109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:05.354 [2024-11-26 19:07:36.448120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:05.354 [2024-11-26 19:07:36.448131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:05.354 [2024-11-26 19:07:36.448142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:05.354 [2024-11-26 19:07:36.448153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:05.354 [2024-11-26 19:07:36.448164] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:05.354 [2024-11-26 19:07:36.448193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:05.354 [2024-11-26 19:07:36.448207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:05.354 [2024-11-26 19:07:36.448218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:05.354 [2024-11-26 19:07:36.448229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:05.354 [2024-11-26 19:07:36.448240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:05.354 [2024-11-26 19:07:36.448253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.354 [2024-11-26 19:07:36.448270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:05.354 [2024-11-26 19:07:36.448292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:23:05.354 [2024-11-26 19:07:36.448302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.354 [2024-11-26 19:07:36.482514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.354 [2024-11-26 19:07:36.482589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:05.354 [2024-11-26 19:07:36.482612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.125 ms 00:23:05.354 [2024-11-26 19:07:36.482624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.354 [2024-11-26 19:07:36.482848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.354 [2024-11-26 19:07:36.482871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:05.354 [2024-11-26 19:07:36.482884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:05.354 [2024-11-26 19:07:36.482896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.354 [2024-11-26 19:07:36.531388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.354 [2024-11-26 19:07:36.531708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:05.354 [2024-11-26 19:07:36.531759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.454 ms 00:23:05.354 [2024-11-26 19:07:36.531790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.354 [2024-11-26 19:07:36.531978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.354 [2024-11-26 19:07:36.531999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:05.354 [2024-11-26 19:07:36.532014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:05.354 [2024-11-26 19:07:36.532025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.354 [2024-11-26 19:07:36.532449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.354 [2024-11-26 19:07:36.532480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:05.355 [2024-11-26 19:07:36.532505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:23:05.355 [2024-11-26 19:07:36.532517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.355 [2024-11-26 19:07:36.532686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.355 [2024-11-26 19:07:36.532705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:05.355 [2024-11-26 19:07:36.532717] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:23:05.355 [2024-11-26 19:07:36.532728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.355 [2024-11-26 19:07:36.554816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.355 [2024-11-26 19:07:36.555206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:05.355 [2024-11-26 19:07:36.555251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.050 ms 00:23:05.355 [2024-11-26 19:07:36.555272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.578548] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:05.613 [2024-11-26 19:07:36.578890] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:05.613 [2024-11-26 19:07:36.578924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.578937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:05.613 [2024-11-26 19:07:36.578953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.405 ms 00:23:05.613 [2024-11-26 19:07:36.578964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.609913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.610256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:05.613 [2024-11-26 19:07:36.610289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.767 ms 00:23:05.613 [2024-11-26 19:07:36.610304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.627221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.627315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:05.613 [2024-11-26 19:07:36.627336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.733 ms 00:23:05.613 [2024-11-26 19:07:36.627348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.644114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.644254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:05.613 [2024-11-26 19:07:36.644290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.578 ms 00:23:05.613 [2024-11-26 19:07:36.644310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.645410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.645563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:05.613 [2024-11-26 19:07:36.645591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:23:05.613 [2024-11-26 19:07:36.645604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.722113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.722221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:05.613 [2024-11-26 19:07:36.722244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 76.459 ms 00:23:05.613 [2024-11-26 19:07:36.722257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.735499] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:05.613 [2024-11-26 19:07:36.749675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.749761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:05.613 [2024-11-26 19:07:36.749783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.209 ms 00:23:05.613 [2024-11-26 19:07:36.749806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.749988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.750008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:05.613 [2024-11-26 19:07:36.750022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:05.613 [2024-11-26 19:07:36.750034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.750109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.750125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:05.613 [2024-11-26 19:07:36.750138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:05.613 [2024-11-26 19:07:36.750155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.750248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.750274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:05.613 [2024-11-26 19:07:36.750287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:05.613 [2024-11-26 19:07:36.750300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.750350] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:05.613 [2024-11-26 19:07:36.750368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.750379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:05.613 [2024-11-26 19:07:36.750391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:05.613 [2024-11-26 19:07:36.750402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.783553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.783641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:05.613 [2024-11-26 19:07:36.783664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.109 ms 00:23:05.613 [2024-11-26 19:07:36.783676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.613 [2024-11-26 19:07:36.783922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.613 [2024-11-26 19:07:36.783943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:05.613 [2024-11-26 19:07:36.783957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:05.613 [2024-11-26 19:07:36.783969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:05.613 [2024-11-26 19:07:36.785093] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:05.613 [2024-11-26 19:07:36.789636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.070 ms, result 0 00:23:05.613 [2024-11-26 19:07:36.790582] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:05.613 [2024-11-26 19:07:36.808196] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:05.900  [2024-11-26T19:07:37.115Z] Copying: 4096/4096 [kB] (average 28 MBps)[2024-11-26 19:07:36.952126] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:05.900 [2024-11-26 19:07:36.964800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.900 [2024-11-26 19:07:36.965067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:05.900 [2024-11-26 19:07:36.965228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:05.900 [2024-11-26 19:07:36.965283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.900 [2024-11-26 19:07:36.965416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:05.900 [2024-11-26 19:07:36.968964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.900 [2024-11-26 19:07:36.969143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:05.900 [2024-11-26 19:07:36.969275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:23:05.900 [2024-11-26 19:07:36.969325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.900 [2024-11-26 19:07:36.970975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.900 [2024-11-26 19:07:36.971126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:05.900 [2024-11-26 19:07:36.971258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:23:05.900 [2024-11-26 19:07:36.971310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.900 [2024-11-26 19:07:36.975359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.900 [2024-11-26 19:07:36.975516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:05.900 [2024-11-26 19:07:36.975641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.928 ms 00:23:05.900 [2024-11-26 19:07:36.975690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.900 [2024-11-26 19:07:36.983328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.900 [2024-11-26 19:07:36.983573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:05.900 [2024-11-26 19:07:36.983686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.506 ms 00:23:05.900 [2024-11-26 19:07:36.983799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.900 [2024-11-26 19:07:37.016262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.900 [2024-11-26 19:07:37.016547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:05.900 [2024-11-26 19:07:37.016665] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 32.331 ms 00:23:05.900 [2024-11-26 19:07:37.016805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.900 [2024-11-26 19:07:37.035263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.900 [2024-11-26 19:07:37.035578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:05.900 [2024-11-26 19:07:37.035712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.299 ms 00:23:05.900 [2024-11-26 19:07:37.035818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.900 [2024-11-26 19:07:37.036126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.900 [2024-11-26 19:07:37.036302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:05.900 [2024-11-26 19:07:37.036443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:23:05.900 [2024-11-26 19:07:37.036496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.900 [2024-11-26 19:07:37.069889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.900 [2024-11-26 19:07:37.069974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:05.900 [2024-11-26 19:07:37.069996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.274 ms 00:23:05.900 [2024-11-26 19:07:37.070010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.900 [2024-11-26 19:07:37.103026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.900 [2024-11-26 19:07:37.103366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:05.900 [2024-11-26 19:07:37.103398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.885 ms 00:23:05.900 [2024-11-26 19:07:37.103411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.159 [2024-11-26 19:07:37.146863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.159 [2024-11-26 19:07:37.147274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:06.159 [2024-11-26 19:07:37.147319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.325 ms 00:23:06.159 [2024-11-26 19:07:37.147339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.159 [2024-11-26 19:07:37.195563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.159 [2024-11-26 19:07:37.195685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:06.159 [2024-11-26 19:07:37.195717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.963 ms 00:23:06.159 [2024-11-26 19:07:37.195736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.159 [2024-11-26 19:07:37.195895] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:06.159 [2024-11-26 19:07:37.195934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:06.159 [2024-11-26 19:07:37.195959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:06.159 [2024-11-26 19:07:37.195978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:06.159 [2024-11-26 19:07:37.195997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:23:06.159 [2024-11-26 19:07:37.196014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:06.159 [2024-11-26 19:07:37.196031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:06.159 [2024-11-26 19:07:37.196048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:06.159 [2024-11-26 19:07:37.196076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:06.159 [2024-11-26 19:07:37.196092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:06.159 [2024-11-26 19:07:37.196109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.196995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197460] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:06.160 [2024-11-26 19:07:37.197628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:06.161 [2024-11-26 19:07:37.197884] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:06.161 [2024-11-26 19:07:37.197902] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: daff057d-85bc-40d2-a74e-43f65c8e8de4 00:23:06.161 [2024-11-26 19:07:37.197918] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:06.161 [2024-11-26 19:07:37.197944] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960
00:23:06.161 [2024-11-26 19:07:37.197959] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:23:06.161 [2024-11-26 19:07:37.197976] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:23:06.161 [2024-11-26 19:07:37.197992] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:06.161 [2024-11-26 19:07:37.198009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:23:06.161 [2024-11-26 19:07:37.198035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:23:06.161 [2024-11-26 19:07:37.198050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:23:06.161 [2024-11-26 19:07:37.198067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:23:06.161 [2024-11-26 19:07:37.198087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:06.161 [2024-11-26 19:07:37.198106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:23:06.161 [2024-11-26 19:07:37.198127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.195 ms
00:23:06.161 [2024-11-26 19:07:37.198146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.161 [2024-11-26 19:07:37.218586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:06.161 [2024-11-26 19:07:37.218851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:23:06.161 [2024-11-26 19:07:37.218990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.371 ms
00:23:06.161 [2024-11-26 19:07:37.219042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.161 [2024-11-26 19:07:37.219692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:06.161 [2024-11-26 19:07:37.219840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:23:06.161 [2024-11-26 19:07:37.219958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms
00:23:06.161 [2024-11-26 19:07:37.220064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.161 [2024-11-26 19:07:37.266663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.161 [2024-11-26 19:07:37.266968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:23:06.161 [2024-11-26 19:07:37.267086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.161 [2024-11-26 19:07:37.267230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.161 [2024-11-26 19:07:37.267395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.161 [2024-11-26 19:07:37.267448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:23:06.161 [2024-11-26 19:07:37.267565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.161 [2024-11-26 19:07:37.267674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.161 [2024-11-26 19:07:37.267796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.161 [2024-11-26 19:07:37.267858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:23:06.161 [2024-11-26 19:07:37.267965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.161 [2024-11-26 19:07:37.268069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.161 [2024-11-26 19:07:37.268146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.161 [2024-11-26 19:07:37.268270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:23:06.161 [2024-11-26 19:07:37.268381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.161 [2024-11-26 19:07:37.268433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.420 [2024-11-26 19:07:37.375282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.420 [2024-11-26 19:07:37.375586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:23:06.420 [2024-11-26 19:07:37.375748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.420 [2024-11-26 19:07:37.375810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.420 [2024-11-26 19:07:37.464372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.420 [2024-11-26 19:07:37.464476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:23:06.420 [2024-11-26 19:07:37.464497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.420 [2024-11-26 19:07:37.464510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.420 [2024-11-26 19:07:37.464609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.420 [2024-11-26 19:07:37.464628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:23:06.420 [2024-11-26 19:07:37.464640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.420 [2024-11-26 19:07:37.464652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.420 [2024-11-26 19:07:37.464687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.420 [2024-11-26 19:07:37.464714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:23:06.420 [2024-11-26 19:07:37.464726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.420 [2024-11-26 19:07:37.464737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.420 [2024-11-26 19:07:37.464874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.420 [2024-11-26 19:07:37.464895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:23:06.420 [2024-11-26 19:07:37.464907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.420 [2024-11-26 19:07:37.464918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.420 [2024-11-26 19:07:37.464970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.420 [2024-11-26 19:07:37.464988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:23:06.420 [2024-11-26 19:07:37.465007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.420 [2024-11-26 19:07:37.465018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.420 [2024-11-26 19:07:37.465067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.420 [2024-11-26 19:07:37.465081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:06.420 [2024-11-26 19:07:37.465093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.420 [2024-11-26 19:07:37.465104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.420 [2024-11-26 19:07:37.465159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:06.420 [2024-11-26 19:07:37.465215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:06.420 [2024-11-26 19:07:37.465228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:06.420 [2024-11-26 19:07:37.465239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.420 [2024-11-26 19:07:37.465423] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.641 ms, result 0
00:23:07.356
00:23:07.356
00:23:07.356 19:07:38 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79002
00:23:07.356 19:07:38 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:23:07.356 19:07:38 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79002
00:23:07.356 19:07:38 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79002 ']'
00:23:07.356 19:07:38 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:07.356 19:07:38 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:07.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:07.356 19:07:38 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:07.356 19:07:38 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:07.356 19:07:38 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:23:07.356 [2024-11-26 19:07:38.552257] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:23:07.356 [2024-11-26 19:07:38.552420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79002 ]
00:23:07.614 [2024-11-26 19:07:38.724874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:07.614 [2024-11-26 19:07:38.827375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:08.545 19:07:39 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:08.545 19:07:39 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:23:08.545 19:07:39 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:23:09.112 [2024-11-26 19:07:40.024470] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:09.112 [2024-11-26 19:07:40.024571] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:09.112 [2024-11-26 19:07:40.215960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.112 [2024-11-26 19:07:40.216043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:23:09.112 [2024-11-26 19:07:40.216069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:23:09.112 [2024-11-26 19:07:40.216083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.112 [2024-11-26 19:07:40.220849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.112 [2024-11-26 19:07:40.220921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:09.112 [2024-11-26 19:07:40.220945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.728 ms
00:23:09.112 [2024-11-26 19:07:40.220959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.112 [2024-11-26 19:07:40.221334] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:09.112 [2024-11-26 19:07:40.222541] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:09.112 [2024-11-26 19:07:40.222748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.112 [2024-11-26 19:07:40.222782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:09.112 [2024-11-26 19:07:40.222812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms
00:23:09.112 [2024-11-26 19:07:40.222840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.112 [2024-11-26 19:07:40.224232] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:09.112 [2024-11-26 19:07:40.244106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.112 [2024-11-26 19:07:40.244284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:23:09.112 [2024-11-26 19:07:40.244323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.873 ms
00:23:09.112 [2024-11-26 19:07:40.244348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.112 [2024-11-26 19:07:40.244668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.112 [2024-11-26 19:07:40.244734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:23:09.112 [2024-11-26 19:07:40.244768]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:09.112 [2024-11-26 19:07:40.244804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.112 [2024-11-26 19:07:40.250432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.112 [2024-11-26 19:07:40.250528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:09.112 [2024-11-26 19:07:40.250551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.495 ms 00:23:09.112 [2024-11-26 19:07:40.250571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.112 [2024-11-26 19:07:40.250874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.112 [2024-11-26 19:07:40.250911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:09.112 [2024-11-26 19:07:40.250929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:23:09.112 [2024-11-26 19:07:40.250957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.112 [2024-11-26 19:07:40.251010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.112 [2024-11-26 19:07:40.251051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:09.112 [2024-11-26 19:07:40.251078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:09.112 [2024-11-26 19:07:40.251099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.112 [2024-11-26 19:07:40.251141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:09.112 [2024-11-26 19:07:40.255615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.112 [2024-11-26 19:07:40.255671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:09.112 [2024-11-26 19:07:40.255697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.475 ms 00:23:09.112 [2024-11-26 19:07:40.255711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.112 [2024-11-26 19:07:40.255850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.112 [2024-11-26 19:07:40.255873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:09.112 [2024-11-26 19:07:40.255900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:09.112 [2024-11-26 19:07:40.255913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.112 [2024-11-26 19:07:40.255953] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:09.112 [2024-11-26 19:07:40.255987] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:09.113 [2024-11-26 19:07:40.256051] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:09.113 [2024-11-26 19:07:40.256077] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:09.113 [2024-11-26 19:07:40.256223] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:09.113 [2024-11-26 19:07:40.256245] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:09.113 [2024-11-26 19:07:40.256279] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:09.113 [2024-11-26 19:07:40.256296] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:09.113 [2024-11-26 19:07:40.256317] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:09.113 [2024-11-26 19:07:40.256331] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:09.113 [2024-11-26 19:07:40.256347] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:09.113 [2024-11-26 19:07:40.256360] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:09.113 [2024-11-26 19:07:40.256382] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:09.113 [2024-11-26 19:07:40.256397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.113 [2024-11-26 19:07:40.256414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:09.113 [2024-11-26 19:07:40.256428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:23:09.113 [2024-11-26 19:07:40.256451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.113 [2024-11-26 19:07:40.256601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.113 [2024-11-26 19:07:40.256654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:09.113 [2024-11-26 19:07:40.256683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:09.113 [2024-11-26 19:07:40.256717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.113 [2024-11-26 19:07:40.256896] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:09.113 [2024-11-26 19:07:40.256932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:09.113 [2024-11-26 19:07:40.256948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.113 [2024-11-26 19:07:40.256966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.113 [2024-11-26 19:07:40.256980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:09.113 [2024-11-26 19:07:40.256997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:09.113 [2024-11-26 19:07:40.257009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:09.113 [2024-11-26 19:07:40.257033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:09.113 [2024-11-26 19:07:40.257046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:09.113 [2024-11-26 19:07:40.257062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.113 [2024-11-26 19:07:40.257074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:09.113 [2024-11-26 19:07:40.257091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:09.113 [2024-11-26 19:07:40.257102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.113 [2024-11-26 19:07:40.257119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:09.113 [2024-11-26 19:07:40.257131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:09.113 [2024-11-26 19:07:40.257147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.113 
[2024-11-26 19:07:40.257159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:09.113 [2024-11-26 19:07:40.257192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:09.113 [2024-11-26 19:07:40.257222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.113 [2024-11-26 19:07:40.257240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:09.113 [2024-11-26 19:07:40.257253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:09.113 [2024-11-26 19:07:40.257269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.113 [2024-11-26 19:07:40.257281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:09.113 [2024-11-26 19:07:40.257302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:09.113 [2024-11-26 19:07:40.257314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.113 [2024-11-26 19:07:40.257330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:09.113 [2024-11-26 19:07:40.257343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:09.113 [2024-11-26 19:07:40.257359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.113 [2024-11-26 19:07:40.257379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:09.113 [2024-11-26 19:07:40.257405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:09.113 [2024-11-26 19:07:40.257434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.113 [2024-11-26 19:07:40.257464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:09.113 [2024-11-26 19:07:40.257722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:09.113 [2024-11-26 19:07:40.257768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.113 [2024-11-26 19:07:40.257796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:09.113 [2024-11-26 19:07:40.257825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:09.113 [2024-11-26 19:07:40.257849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.113 [2024-11-26 19:07:40.257875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:09.113 [2024-11-26 19:07:40.257899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:09.113 [2024-11-26 19:07:40.257928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.113 [2024-11-26 19:07:40.257949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:09.113 [2024-11-26 19:07:40.257965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:09.113 [2024-11-26 19:07:40.257976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.113 [2024-11-26 19:07:40.257989] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:09.113 [2024-11-26 19:07:40.258005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:09.113 [2024-11-26 19:07:40.258019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.113 [2024-11-26 19:07:40.258030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.113 [2024-11-26 19:07:40.258044] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:09.113 [2024-11-26 19:07:40.258056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:09.113 [2024-11-26 19:07:40.258069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:09.113 [2024-11-26 19:07:40.258081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:09.113 [2024-11-26 19:07:40.258093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:09.113 [2024-11-26 19:07:40.258105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:09.113 [2024-11-26 19:07:40.258120] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:09.113 [2024-11-26 19:07:40.258136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.113 [2024-11-26 19:07:40.258162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:09.113 [2024-11-26 19:07:40.258199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:09.113 [2024-11-26 19:07:40.258223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:09.113 [2024-11-26 19:07:40.258236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:09.113 [2024-11-26 19:07:40.258254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:09.113 [2024-11-26 19:07:40.258267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:09.113 [2024-11-26 19:07:40.258285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:09.113 [2024-11-26 19:07:40.258298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:09.113 [2024-11-26 19:07:40.258314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:09.113 [2024-11-26 19:07:40.258328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:09.113 [2024-11-26 19:07:40.258345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:09.113 [2024-11-26 19:07:40.258358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:09.113 [2024-11-26 19:07:40.258375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:09.113 [2024-11-26 19:07:40.258388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:09.113 [2024-11-26 19:07:40.258405] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:09.113 [2024-11-26 
19:07:40.258420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.113 [2024-11-26 19:07:40.258443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:09.113 [2024-11-26 19:07:40.258457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:09.113 [2024-11-26 19:07:40.258474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:09.113 [2024-11-26 19:07:40.258486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:09.113 [2024-11-26 19:07:40.258507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.113 [2024-11-26 19:07:40.258520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:09.113 [2024-11-26 19:07:40.258539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:23:09.113 [2024-11-26 19:07:40.258557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.114 [2024-11-26 19:07:40.294352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.114 [2024-11-26 19:07:40.294638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:09.114 [2024-11-26 19:07:40.294886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.695 ms 00:23:09.114 [2024-11-26 19:07:40.295062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.114 [2024-11-26 19:07:40.295534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.114 [2024-11-26 19:07:40.295715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:09.114 [2024-11-26 19:07:40.295916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:09.114 [2024-11-26 19:07:40.296076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.339300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.339595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:09.372 [2024-11-26 19:07:40.339788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.997 ms 00:23:09.372 [2024-11-26 19:07:40.339971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.340383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.340541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:09.372 [2024-11-26 19:07:40.340742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:09.372 [2024-11-26 19:07:40.340901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.341480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.341636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:09.372 [2024-11-26 19:07:40.341838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:23:09.372 [2024-11-26 19:07:40.342005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.342380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.342531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:09.372 [2024-11-26 19:07:40.342736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:23:09.372 [2024-11-26 19:07:40.342894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.362366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.362644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:09.372 [2024-11-26 19:07:40.362846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.275 ms 00:23:09.372 [2024-11-26 19:07:40.363007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.392508] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:09.372 [2024-11-26 19:07:40.392798] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:09.372 [2024-11-26 19:07:40.392873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.392893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:09.372 [2024-11-26 19:07:40.392915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.581 ms 00:23:09.372 [2024-11-26 19:07:40.392953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.425753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.425856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:09.372 [2024-11-26 19:07:40.425888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.588 ms 00:23:09.372 [2024-11-26 19:07:40.425903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.443052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.443140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:09.372 [2024-11-26 19:07:40.443205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.908 ms 00:23:09.372 [2024-11-26 19:07:40.443224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.459457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.459551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:09.372 [2024-11-26 19:07:40.459582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.990 ms 00:23:09.372 [2024-11-26 19:07:40.459596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.460614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.460656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:09.372 [2024-11-26 19:07:40.460681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:23:09.372 [2024-11-26 19:07:40.460695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 
19:07:40.536572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.536670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:09.372 [2024-11-26 19:07:40.536702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.818 ms 00:23:09.372 [2024-11-26 19:07:40.536716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.550084] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:09.372 [2024-11-26 19:07:40.565321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.565442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:09.372 [2024-11-26 19:07:40.565466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.424 ms 00:23:09.372 [2024-11-26 19:07:40.565487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.565697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.565740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:09.372 [2024-11-26 19:07:40.565770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:09.372 [2024-11-26 19:07:40.565800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.372 [2024-11-26 19:07:40.565885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.372 [2024-11-26 19:07:40.565920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:09.373 [2024-11-26 19:07:40.565936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:09.373 [2024-11-26 19:07:40.565962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.373 [2024-11-26 19:07:40.565999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.373 [2024-11-26 19:07:40.566034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:09.373 [2024-11-26 19:07:40.566051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:09.373 [2024-11-26 19:07:40.566071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.373 [2024-11-26 19:07:40.566124] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:09.373 [2024-11-26 19:07:40.566154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.373 [2024-11-26 19:07:40.566208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:09.373 [2024-11-26 19:07:40.566230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:09.373 [2024-11-26 19:07:40.566250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.631 [2024-11-26 19:07:40.599413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.631 [2024-11-26 19:07:40.599504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:09.631 [2024-11-26 19:07:40.599535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.096 ms 00:23:09.631 [2024-11-26 19:07:40.599559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.631 [2024-11-26 19:07:40.599788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.631 [2024-11-26 19:07:40.599811] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:23:09.631 [2024-11-26 19:07:40.599841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:23:09.631 [2024-11-26 19:07:40.599854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.631 [2024-11-26 19:07:40.601100] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:09.631 [2024-11-26 19:07:40.605568] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.727 ms, result 0
00:23:09.631 [2024-11-26 19:07:40.606666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:09.631 Some configs were skipped because the RPC state that can call them passed over.
00:23:09.631 19:07:40 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:23:09.890 [2024-11-26 19:07:40.925164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.890 [2024-11-26 19:07:40.925485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:23:09.890 [2024-11-26 19:07:40.925620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms
00:23:09.890 [2024-11-26 19:07:40.925796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.890 [2024-11-26 19:07:40.925917] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.095 ms, result 0
00:23:09.890 true
00:23:09.890 19:07:40 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:23:10.149 [2024-11-26 19:07:41.181165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:10.149 [2024-11-26 19:07:41.181446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:23:10.149 [2024-11-26 19:07:41.181618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms
00:23:10.149 [2024-11-26 19:07:41.181747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:10.149 [2024-11-26 19:07:41.181871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.674 ms, result 0
00:23:10.149 true
00:23:10.149 19:07:41 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79002
00:23:10.149 19:07:41 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79002 ']'
00:23:10.149 19:07:41 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79002
00:23:10.149 19:07:41 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:23:10.149 19:07:41 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:10.149 19:07:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79002
00:23:10.149 killing process with pid 79002
00:23:10.149 19:07:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:10.149 19:07:41 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:10.149 19:07:41 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79002'
00:23:10.149 19:07:41 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79002
00:23:10.149 19:07:41 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79002
00:23:11.084 [2024-11-26 19:07:42.204801]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.084 [2024-11-26 19:07:42.204888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:11.084 [2024-11-26 19:07:42.204910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:23:11.084 [2024-11-26 19:07:42.204925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.084 [2024-11-26 19:07:42.204962] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:11.084 [2024-11-26 19:07:42.208371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.084 [2024-11-26 19:07:42.208420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:11.084 [2024-11-26 19:07:42.208445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.377 ms
00:23:11.084 [2024-11-26 19:07:42.208457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.084 [2024-11-26 19:07:42.208792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.084 [2024-11-26 19:07:42.208817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:11.084 [2024-11-26 19:07:42.208834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms
00:23:11.084 [2024-11-26 19:07:42.208847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.084 [2024-11-26 19:07:42.212957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.084 [2024-11-26 19:07:42.213008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:23:11.084 [2024-11-26 19:07:42.213029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms
00:23:11.084 [2024-11-26 19:07:42.213042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.084 [2024-11-26 19:07:42.220650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.084 [2024-11-26 19:07:42.220721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:23:11.084 [2024-11-26 19:07:42.220742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.547 ms
00:23:11.084 [2024-11-26 19:07:42.220755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.084 [2024-11-26 19:07:42.233721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.084 [2024-11-26 19:07:42.233828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:23:11.084 [2024-11-26 19:07:42.233857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.845 ms
00:23:11.084 [2024-11-26 19:07:42.233870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.084 [2024-11-26 19:07:42.242444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.084 [2024-11-26 19:07:42.242533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:23:11.084 [2024-11-26 19:07:42.242556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.448 ms
00:23:11.085 [2024-11-26 19:07:42.242570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.085 [2024-11-26 19:07:42.242771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.085 [2024-11-26 19:07:42.242793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:23:11.085 [2024-11-26 19:07:42.242810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms
00:23:11.085 [2024-11-26 19:07:42.242823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.085 [2024-11-26 19:07:42.256365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.085 [2024-11-26 19:07:42.256476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:23:11.085 [2024-11-26 19:07:42.256516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.498 ms
00:23:11.085 [2024-11-26 19:07:42.256532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.085 [2024-11-26 19:07:42.269652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.085 [2024-11-26 19:07:42.269744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:23:11.085 [2024-11-26 19:07:42.269779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.945 ms
00:23:11.085 [2024-11-26 19:07:42.269794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.085 [2024-11-26 19:07:42.283898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.085 [2024-11-26 19:07:42.283987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:23:11.085 [2024-11-26 19:07:42.284016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.957 ms
00:23:11.085 [2024-11-26 19:07:42.284031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.085 [2024-11-26 19:07:42.296893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.085 [2024-11-26 19:07:42.296987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:23:11.085 [2024-11-26 19:07:42.297016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.721 ms
00:23:11.085 [2024-11-26 19:07:42.297029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.085 [2024-11-26 19:07:42.297112] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:11.085 [2024-11-26 19:07:42.297139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-100 identical: 0 / 261120 wr_cnt: 0 state: free]
00:23:11.345 [2024-11-26 19:07:42.298916] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:23:11.345 [2024-11-26 19:07:42.298941] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: daff057d-85bc-40d2-a74e-43f65c8e8de4
00:23:11.345 [2024-11-26 19:07:42.298960] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:23:11.345 [2024-11-26 19:07:42.298977] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:23:11.345 [2024-11-26 19:07:42.298990] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:23:11.345 [2024-11-26 19:07:42.299007] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:23:11.345 [2024-11-26 19:07:42.299020] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:11.345 [2024-11-26 19:07:42.299039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:23:11.345 [2024-11-26 19:07:42.299051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:23:11.345 [2024-11-26 19:07:42.299067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:23:11.345 [2024-11-26 19:07:42.299078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:23:11.345 [2024-11-26 19:07:42.299097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.345 [2024-11-26 19:07:42.299118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:11.345 [2024-11-26 19:07:42.299138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:23:11.345 [2024-11-26 19:07:42.299158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.345 [2024-11-26 19:07:42.316546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.345 [2024-11-26 19:07:42.316802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:11.345 [2024-11-26 19:07:42.316854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.272 ms 00:23:11.345 [2024-11-26 19:07:42.316871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.345 [2024-11-26 19:07:42.317459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.345 [2024-11-26 19:07:42.317489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:11.345 [2024-11-26 19:07:42.317519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:23:11.345 [2024-11-26 19:07:42.317533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.345 [2024-11-26 19:07:42.377820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.345 [2024-11-26 19:07:42.377896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.345 [2024-11-26 19:07:42.377921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.345 [2024-11-26 19:07:42.377935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.345 [2024-11-26 19:07:42.378094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.345 [2024-11-26 19:07:42.378113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.345 [2024-11-26 19:07:42.378136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.345 [2024-11-26 19:07:42.378148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.345 [2024-11-26 19:07:42.378249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.345 [2024-11-26 19:07:42.378270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.345 [2024-11-26 19:07:42.378300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.345 [2024-11-26 19:07:42.378314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.345 [2024-11-26 19:07:42.378349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.345 [2024-11-26 19:07:42.378365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.345 [2024-11-26 19:07:42.378383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.345 [2024-11-26 19:07:42.378402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.345 [2024-11-26 19:07:42.500489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.345 [2024-11-26 19:07:42.500577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.345 [2024-11-26 19:07:42.500602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.345 [2024-11-26 19:07:42.500615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.604 [2024-11-26 
19:07:42.599272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.604 [2024-11-26 19:07:42.599645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.604 [2024-11-26 19:07:42.599705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.604 [2024-11-26 19:07:42.599722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.604 [2024-11-26 19:07:42.599888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.604 [2024-11-26 19:07:42.599908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.604 [2024-11-26 19:07:42.599934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.604 [2024-11-26 19:07:42.599947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.604 [2024-11-26 19:07:42.599993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.604 [2024-11-26 19:07:42.600010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.604 [2024-11-26 19:07:42.600029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.604 [2024-11-26 19:07:42.600041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.604 [2024-11-26 19:07:42.600236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.604 [2024-11-26 19:07:42.600259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.604 [2024-11-26 19:07:42.600279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.604 [2024-11-26 19:07:42.600292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.604 [2024-11-26 19:07:42.600371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.604 [2024-11-26 19:07:42.600391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:11.604 [2024-11-26 19:07:42.600412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.604 [2024-11-26 19:07:42.600425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.604 [2024-11-26 19:07:42.600491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.604 [2024-11-26 19:07:42.600508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.604 [2024-11-26 19:07:42.600531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.604 [2024-11-26 19:07:42.600544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.604 [2024-11-26 19:07:42.600609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.604 [2024-11-26 19:07:42.600628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.604 [2024-11-26 19:07:42.600646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.604 [2024-11-26 19:07:42.600659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.604 [2024-11-26 19:07:42.600884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.058 ms, result 0 00:23:12.539 19:07:43 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:12.539 [2024-11-26 19:07:43.660494] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:23:12.539 [2024-11-26 19:07:43.660653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79068 ] 00:23:12.798 [2024-11-26 19:07:43.839151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.798 [2024-11-26 19:07:43.996891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.364 [2024-11-26 19:07:44.387402] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:13.364 [2024-11-26 19:07:44.387533] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:13.364 [2024-11-26 19:07:44.553878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.364 [2024-11-26 19:07:44.553958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:13.364 [2024-11-26 19:07:44.553981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:13.364 [2024-11-26 19:07:44.553994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.364 [2024-11-26 19:07:44.557836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.364 [2024-11-26 19:07:44.557901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:13.364 [2024-11-26 19:07:44.557921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.808 ms 00:23:13.364 [2024-11-26 19:07:44.557933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.364 [2024-11-26 19:07:44.558125] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:13.364 [2024-11-26 19:07:44.559124] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:13.364 [2024-11-26 19:07:44.559189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.364 [2024-11-26 19:07:44.559207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:13.364 [2024-11-26 19:07:44.559220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:23:13.364 [2024-11-26 19:07:44.559230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.364 [2024-11-26 19:07:44.560541] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:13.364 [2024-11-26 19:07:44.577524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.625 [2024-11-26 19:07:44.577861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:13.625 [2024-11-26 19:07:44.577895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.980 ms 00:23:13.625 [2024-11-26 19:07:44.577908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.625 [2024-11-26 19:07:44.578129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.625 [2024-11-26 19:07:44.578152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:13.625 [2024-11-26 19:07:44.578207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:13.625 [2024-11-26 
19:07:44.578221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.625 [2024-11-26 19:07:44.583157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.625 [2024-11-26 19:07:44.583244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:13.625 [2024-11-26 19:07:44.583263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.867 ms 00:23:13.625 [2024-11-26 19:07:44.583275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.625 [2024-11-26 19:07:44.583446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.625 [2024-11-26 19:07:44.583470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:13.625 [2024-11-26 19:07:44.583485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:13.625 [2024-11-26 19:07:44.583496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.625 [2024-11-26 19:07:44.583544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.625 [2024-11-26 19:07:44.583580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:13.625 [2024-11-26 19:07:44.583593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:13.625 [2024-11-26 19:07:44.583605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.625 [2024-11-26 19:07:44.583645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:13.625 [2024-11-26 19:07:44.587971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.625 [2024-11-26 19:07:44.588190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:13.625 [2024-11-26 19:07:44.588220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.336 ms 00:23:13.625 [2024-11-26 19:07:44.588233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.625 [2024-11-26 19:07:44.588360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.625 [2024-11-26 19:07:44.588381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:13.625 [2024-11-26 19:07:44.588394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:13.625 [2024-11-26 19:07:44.588405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.625 [2024-11-26 19:07:44.588446] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:13.625 [2024-11-26 19:07:44.588477] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:13.625 [2024-11-26 19:07:44.588520] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:13.625 [2024-11-26 19:07:44.588541] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:13.625 [2024-11-26 19:07:44.588655] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:13.625 [2024-11-26 19:07:44.588670] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:13.625 [2024-11-26 19:07:44.588685] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:23:13.625 [2024-11-26 19:07:44.588704] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:13.625 [2024-11-26 19:07:44.588718] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:13.625 [2024-11-26 19:07:44.588730] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:13.625 [2024-11-26 19:07:44.588740] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:13.625 [2024-11-26 19:07:44.588751] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:13.625 [2024-11-26 19:07:44.588762] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:13.625 [2024-11-26 19:07:44.588774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.625 [2024-11-26 19:07:44.588785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:13.626 [2024-11-26 19:07:44.588796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:23:13.626 [2024-11-26 19:07:44.588807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.626 [2024-11-26 19:07:44.588910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.626 [2024-11-26 19:07:44.588931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:13.626 [2024-11-26 19:07:44.588943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:13.626 [2024-11-26 19:07:44.588953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.626 [2024-11-26 19:07:44.589082] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:13.626 [2024-11-26 19:07:44.589100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:13.626 [2024-11-26 19:07:44.589113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:13.626 [2024-11-26 19:07:44.589124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:13.626 [2024-11-26 19:07:44.589145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:13.626 [2024-11-26 19:07:44.589165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:13.626 [2024-11-26 19:07:44.589207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:13.626 [2024-11-26 19:07:44.589229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:13.626 [2024-11-26 19:07:44.589275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:13.626 [2024-11-26 19:07:44.589300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:13.626 [2024-11-26 19:07:44.589320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:13.626 [2024-11-26 19:07:44.589339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:13.626 [2024-11-26 19:07:44.589358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:23:13.626 [2024-11-26 19:07:44.589396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:13.626 [2024-11-26 19:07:44.589415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:13.626 [2024-11-26 19:07:44.589455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.626 [2024-11-26 19:07:44.589496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:13.626 [2024-11-26 19:07:44.589516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.626 [2024-11-26 19:07:44.589555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:13.626 [2024-11-26 19:07:44.589576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.626 [2024-11-26 19:07:44.589616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:13.626 [2024-11-26 19:07:44.589634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:13.626 [2024-11-26 19:07:44.589671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:13.626 [2024-11-26 19:07:44.589692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:13.626 [2024-11-26 19:07:44.589730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:13.626 [2024-11-26 19:07:44.589748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:13.626 [2024-11-26 19:07:44.589768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:13.626 [2024-11-26 19:07:44.589789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:13.626 [2024-11-26 19:07:44.589809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:13.626 [2024-11-26 19:07:44.589828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:13.626 [2024-11-26 19:07:44.589864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:13.626 [2024-11-26 19:07:44.589883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589904] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:13.626 [2024-11-26 19:07:44.589925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:13.626 [2024-11-26 19:07:44.589957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:13.626 [2024-11-26 19:07:44.589978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:13.626 [2024-11-26 19:07:44.589998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:13.626 [2024-11-26 19:07:44.590018] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:13.626 [2024-11-26 19:07:44.590039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:13.626 [2024-11-26 19:07:44.590061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:13.626 [2024-11-26 19:07:44.590080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:13.626 [2024-11-26 19:07:44.590102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:13.626 [2024-11-26 19:07:44.590140] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:13.626 [2024-11-26 19:07:44.590165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:13.626 [2024-11-26 19:07:44.590205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:13.626 [2024-11-26 19:07:44.590223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:13.626 [2024-11-26 19:07:44.590243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:13.626 [2024-11-26 19:07:44.590266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:13.626 [2024-11-26 19:07:44.590287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:13.626 [2024-11-26 19:07:44.590307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:13.626 [2024-11-26 19:07:44.590328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:13.626 [2024-11-26 19:07:44.590349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:13.626 [2024-11-26 19:07:44.590370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:13.626 [2024-11-26 19:07:44.590390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:13.626 [2024-11-26 19:07:44.590410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:13.627 [2024-11-26 19:07:44.590430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:13.627 [2024-11-26 19:07:44.590450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:13.627 [2024-11-26 19:07:44.590471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:13.627 [2024-11-26 19:07:44.590492] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:13.627 [2024-11-26 19:07:44.590514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:13.627 [2024-11-26 19:07:44.590536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:13.627 [2024-11-26 19:07:44.590559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:13.627 [2024-11-26 19:07:44.590579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:13.627 [2024-11-26 19:07:44.590599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:13.627 [2024-11-26 19:07:44.590623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.590656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:13.627 [2024-11-26 19:07:44.590678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:23:13.627 [2024-11-26 19:07:44.590699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.625430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.625505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:13.627 [2024-11-26 19:07:44.625527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.607 ms 00:23:13.627 [2024-11-26 19:07:44.625540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.625765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.625788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:13.627 [2024-11-26 19:07:44.625802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:13.627 [2024-11-26 19:07:44.625814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.679011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.679090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:13.627 [2024-11-26 19:07:44.679118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.160 ms 00:23:13.627 [2024-11-26 19:07:44.679130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.679348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.679372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:13.627 [2024-11-26 19:07:44.679385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:13.627 [2024-11-26 19:07:44.679397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.679774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.679800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:13.627 [2024-11-26 19:07:44.679824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:23:13.627 [2024-11-26 19:07:44.679835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.680001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.680022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:13.627 [2024-11-26 19:07:44.680034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:23:13.627 [2024-11-26 19:07:44.680046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.698417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.698492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:13.627 [2024-11-26 19:07:44.698514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.337 ms 00:23:13.627 [2024-11-26 19:07:44.698527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.715565] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:13.627 [2024-11-26 19:07:44.715662] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:13.627 [2024-11-26 19:07:44.715688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.715701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:13.627 [2024-11-26 19:07:44.715718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.960 ms 00:23:13.627 [2024-11-26 19:07:44.715731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.746730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.746829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:13.627 [2024-11-26 19:07:44.746852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.810 ms 00:23:13.627 [2024-11-26 19:07:44.746864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.763871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.763967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:13.627 [2024-11-26 19:07:44.763989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.793 ms 00:23:13.627 [2024-11-26 19:07:44.764003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.781650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.781982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:13.627 [2024-11-26 19:07:44.782015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.402 ms 00:23:13.627 [2024-11-26 19:07:44.782028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.627 [2024-11-26 19:07:44.782999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.627 [2024-11-26 19:07:44.783040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:13.627 [2024-11-26 19:07:44.783058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:23:13.627 [2024-11-26 19:07:44.783070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.886 [2024-11-26 19:07:44.859006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.887 [2024-11-26 
19:07:44.859116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:13.887 [2024-11-26 19:07:44.859139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.894 ms 00:23:13.887 [2024-11-26 19:07:44.859152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.887 [2024-11-26 19:07:44.872457] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:13.887 [2024-11-26 19:07:44.887070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.887 [2024-11-26 19:07:44.887471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:13.887 [2024-11-26 19:07:44.887510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.698 ms 00:23:13.887 [2024-11-26 19:07:44.887538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.887 [2024-11-26 19:07:44.887755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.887 [2024-11-26 19:07:44.887778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:13.887 [2024-11-26 19:07:44.887792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:13.887 [2024-11-26 19:07:44.887804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.887 [2024-11-26 19:07:44.887884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.887 [2024-11-26 19:07:44.887903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:13.887 [2024-11-26 19:07:44.887915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:13.887 [2024-11-26 19:07:44.887933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.887 [2024-11-26 19:07:44.887982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.887 [2024-11-26 19:07:44.888001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:13.887 [2024-11-26 19:07:44.888014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:13.887 [2024-11-26 19:07:44.888025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.887 [2024-11-26 19:07:44.888075] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:13.887 [2024-11-26 19:07:44.888093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.887 [2024-11-26 19:07:44.888105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:13.887 [2024-11-26 19:07:44.888117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:13.887 [2024-11-26 19:07:44.888128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.887 [2024-11-26 19:07:44.920544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.887 [2024-11-26 19:07:44.920628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:13.887 [2024-11-26 19:07:44.920650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.382 ms 00:23:13.887 [2024-11-26 19:07:44.920664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:13.887 [2024-11-26 19:07:44.920934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:13.887 [2024-11-26 19:07:44.920969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:13.887 [2024-11-26 
19:07:44.920993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:23:13.887 [2024-11-26 19:07:44.921015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:13.887 [2024-11-26 19:07:44.922320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:13.887 [2024-11-26 19:07:44.927030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 368.080 ms, result 0
00:23:13.887 [2024-11-26 19:07:44.927858] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:13.887 [2024-11-26 19:07:44.944783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:14.826  [2024-11-26T19:07:47.414Z] Copying: 29/256 [MB] (29 MBps)
[2024-11-26T19:07:48.348Z] Copying: 55/256 [MB] (25 MBps)
[2024-11-26T19:07:49.284Z] Copying: 82/256 [MB] (27 MBps)
[2024-11-26T19:07:50.218Z] Copying: 108/256 [MB] (25 MBps)
[2024-11-26T19:07:51.152Z] Copying: 132/256 [MB] (24 MBps)
[2024-11-26T19:07:52.086Z] Copying: 156/256 [MB] (23 MBps)
[2024-11-26T19:07:53.020Z] Copying: 179/256 [MB] (23 MBps)
[2024-11-26T19:07:54.397Z] Copying: 204/256 [MB] (24 MBps)
[2024-11-26T19:07:55.331Z] Copying: 229/256 [MB] (25 MBps)
[2024-11-26T19:07:55.331Z] Copying: 253/256 [MB] (23 MBps)
[2024-11-26T19:07:55.589Z] Copying: 256/256 [MB] (average 25 MBps)
[2024-11-26 19:07:55.559757] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:24.374 [2024-11-26 19:07:55.577814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:24.374 [2024-11-26 19:07:55.577926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:24.374 [2024-11-26 19:07:55.577973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:23:24.374 [2024-11-26 19:07:55.577988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:24.374 [2024-11-26 19:07:55.578032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:24.374 [2024-11-26 19:07:55.582308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:24.374 [2024-11-26 19:07:55.582423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:24.374 [2024-11-26 19:07:55.582447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.234 ms
00:23:24.374 [2024-11-26 19:07:55.582461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:24.374 [2024-11-26 19:07:55.583829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:24.374 [2024-11-26 19:07:55.583883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:24.374 [2024-11-26 19:07:55.583905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms
00:23:24.374 [2024-11-26 19:07:55.583919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:24.636 [2024-11-26 19:07:55.590795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:24.636 [2024-11-26 19:07:55.591116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:23:24.636 [2024-11-26 19:07:55.591153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.827 ms
00:23:24.636 [2024-11-26 19:07:55.591190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:23:24.636 [2024-11-26 19:07:55.602295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.636 [2024-11-26 19:07:55.602448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:24.636 [2024-11-26 19:07:55.602497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.983 ms 00:23:24.636 [2024-11-26 19:07:55.602526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.636 [2024-11-26 19:07:55.664428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.636 [2024-11-26 19:07:55.664560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:24.636 [2024-11-26 19:07:55.664598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.675 ms 00:23:24.636 [2024-11-26 19:07:55.664624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.636 [2024-11-26 19:07:55.687932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.636 [2024-11-26 19:07:55.688300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:24.636 [2024-11-26 19:07:55.688355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.120 ms 00:23:24.636 [2024-11-26 19:07:55.688372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.636 [2024-11-26 19:07:55.688633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.636 [2024-11-26 19:07:55.688660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:24.636 [2024-11-26 19:07:55.688694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:23:24.636 [2024-11-26 19:07:55.688709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.636 [2024-11-26 19:07:55.728908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.636 [2024-11-26 19:07:55.729002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:24.636 [2024-11-26 19:07:55.729026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.163 ms 00:23:24.636 [2024-11-26 19:07:55.729040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.636 [2024-11-26 19:07:55.771630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.636 [2024-11-26 19:07:55.771964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:24.636 [2024-11-26 19:07:55.772002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.481 ms 00:23:24.636 [2024-11-26 19:07:55.772019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.636 [2024-11-26 19:07:55.814155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.636 [2024-11-26 19:07:55.814258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:24.636 [2024-11-26 19:07:55.814282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.898 ms 00:23:24.636 [2024-11-26 19:07:55.814296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.905 [2024-11-26 19:07:55.854237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.905 [2024-11-26 19:07:55.854342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:24.905 [2024-11-26 19:07:55.854369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.778 ms 00:23:24.905 
[2024-11-26 19:07:55.854383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.905 [2024-11-26 19:07:55.854488] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:24.905 [2024-11-26 19:07:55.854517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:24.905 [2024-11-26 19:07:55.854772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.854807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.854829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.854843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.854857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.854871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.854885] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.854901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.854938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.854973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.854997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 
19:07:55.855314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:23:24.906 [2024-11-26 19:07:55.855686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.855993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.856008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.856021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.856035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.856050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.856064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.856078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:24.906 [2024-11-26 19:07:55.856104] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:24.906 [2024-11-26 19:07:55.856119] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: daff057d-85bc-40d2-a74e-43f65c8e8de4 00:23:24.906 [2024-11-26 19:07:55.856133] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:24.906 [2024-11-26 19:07:55.856146] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:24.906 [2024-11-26 19:07:55.856159] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:24.906 [2024-11-26 19:07:55.856186] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:24.906 [2024-11-26 19:07:55.856201] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:24.906 [2024-11-26 19:07:55.856215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:24.906 [2024-11-26 19:07:55.856235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:24.907 [2024-11-26 19:07:55.856247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:24.907 [2024-11-26 19:07:55.856259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:24.907 [2024-11-26 19:07:55.856273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.907 [2024-11-26 19:07:55.856288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:24.907 [2024-11-26 19:07:55.856303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.787 ms 00:23:24.907 [2024-11-26 19:07:55.856316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.907 [2024-11-26 19:07:55.878259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.907 [2024-11-26 19:07:55.878342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:24.907 [2024-11-26 19:07:55.878365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.890 ms 00:23:24.907 [2024-11-26 19:07:55.878381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.907 [2024-11-26 19:07:55.879033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.907 [2024-11-26 19:07:55.879072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:24.907 [2024-11-26 19:07:55.879089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:23:24.907 [2024-11-26 19:07:55.879102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.907 [2024-11-26 19:07:55.935905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.907 [2024-11-26 19:07:55.935982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:24.907 [2024-11-26 19:07:55.936002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.907 [2024-11-26 19:07:55.936022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.907 [2024-11-26 19:07:55.936150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.907 [2024-11-26 19:07:55.936196] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:24.907 [2024-11-26 19:07:55.936213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.907 [2024-11-26 19:07:55.936225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.907 [2024-11-26 19:07:55.936308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.907 [2024-11-26 19:07:55.936326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:24.907 [2024-11-26 19:07:55.936339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.907 [2024-11-26 19:07:55.936350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.907 [2024-11-26 19:07:55.936381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.907 [2024-11-26 19:07:55.936395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:24.907 [2024-11-26 19:07:55.936407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.907 [2024-11-26 19:07:55.936417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.907 [2024-11-26 19:07:56.060831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.907 [2024-11-26 19:07:56.060925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.907 [2024-11-26 19:07:56.060945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.907 [2024-11-26 19:07:56.060957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.166 [2024-11-26 19:07:56.148915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.166 [2024-11-26 19:07:56.149006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:25.166 [2024-11-26 19:07:56.149027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.166 [2024-11-26 19:07:56.149039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.166 [2024-11-26 19:07:56.149132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.166 [2024-11-26 19:07:56.149150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:25.166 [2024-11-26 19:07:56.149164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.166 [2024-11-26 19:07:56.149209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.166 [2024-11-26 19:07:56.149256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.166 [2024-11-26 19:07:56.149284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:25.166 [2024-11-26 19:07:56.149296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.166 [2024-11-26 19:07:56.149307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.166 [2024-11-26 19:07:56.149441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.166 [2024-11-26 19:07:56.149460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:25.166 [2024-11-26 19:07:56.149473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.166 [2024-11-26 19:07:56.149483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.166 [2024-11-26 19:07:56.149541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:25.166 [2024-11-26 19:07:56.149559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:25.166 [2024-11-26 19:07:56.149578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.166 [2024-11-26 19:07:56.149589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.166 [2024-11-26 19:07:56.149638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.166 [2024-11-26 19:07:56.149660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:25.166 [2024-11-26 19:07:56.149672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.166 [2024-11-26 19:07:56.149683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.166 [2024-11-26 19:07:56.149740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.166 [2024-11-26 19:07:56.149762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:25.166 [2024-11-26 19:07:56.149774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.166 [2024-11-26 19:07:56.149785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.166 [2024-11-26 19:07:56.149953] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 572.187 ms, result 0 00:23:26.103 00:23:26.103 00:23:26.103 19:07:57 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:26.669 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:23:26.669 19:07:57 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:26.669 19:07:57 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:23:26.669 19:07:57 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:26.669 19:07:57 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:26.669 19:07:57 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:23:26.669 19:07:57 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:26.669 19:07:57 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79002 00:23:26.669 19:07:57 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79002 ']' 00:23:26.669 19:07:57 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79002 00:23:26.669 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79002) - No such process 00:23:26.669 Process with pid 79002 is not found 00:23:26.669 19:07:57 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79002 is not found' 00:23:26.669 00:23:26.669 real 1m11.755s 00:23:26.669 user 1m42.934s 00:23:26.669 sys 0m7.698s 00:23:26.669 19:07:57 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.669 ************************************ 00:23:26.669 END TEST ftl_trim 00:23:26.669 19:07:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:26.669 ************************************ 00:23:26.928 19:07:57 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:26.928 19:07:57 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:26.928 19:07:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.928 19:07:57 ftl -- common/autotest_common.sh@10 
-- # set +x 00:23:26.928 ************************************ 00:23:26.928 START TEST ftl_restore 00:23:26.928 ************************************ 00:23:26.928 19:07:57 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:26.928 * Looking for test storage... 00:23:26.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:26.928 19:07:57 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:26.928 19:07:57 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:23:26.928 19:07:57 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.928 19:07:58 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:26.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.928 --rc genhtml_branch_coverage=1 00:23:26.928 --rc genhtml_function_coverage=1 00:23:26.928 --rc genhtml_legend=1 00:23:26.928 --rc geninfo_all_blocks=1 00:23:26.928 --rc geninfo_unexecuted_blocks=1 00:23:26.928 00:23:26.928 ' 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:26.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.928 --rc genhtml_branch_coverage=1 00:23:26.928 --rc genhtml_function_coverage=1 00:23:26.928 --rc genhtml_legend=1 00:23:26.928 --rc geninfo_all_blocks=1 00:23:26.928 --rc geninfo_unexecuted_blocks=1 00:23:26.928 00:23:26.928 ' 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:26.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.928 --rc genhtml_branch_coverage=1 00:23:26.928 --rc genhtml_function_coverage=1 00:23:26.928 --rc genhtml_legend=1 00:23:26.928 --rc geninfo_all_blocks=1 00:23:26.928 --rc geninfo_unexecuted_blocks=1 00:23:26.928 00:23:26.928 ' 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:26.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.928 --rc genhtml_branch_coverage=1 00:23:26.928 --rc genhtml_function_coverage=1 00:23:26.928 --rc genhtml_legend=1 00:23:26.928 --rc geninfo_all_blocks=1 00:23:26.928 --rc geninfo_unexecuted_blocks=1 00:23:26.928 00:23:26.928 ' 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
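The xtrace above is scripts/common.sh stepping through its version comparison: "lt 1.15 2" splits both version strings on ".", "-" and ":" and compares them field by field to decide which lcov flags apply. A condensed re-creation of that idiom, not the script verbatim (missing fields default to 0 here):

lt() {
  local -a ver1 ver2
  local v
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not less-than
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not less-than
}
lt 1.15 2 && echo "lcov older than 2.x"   # 1 < 2 on the first field, so this prints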
00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.a2U0VxdMbf 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:26.928 
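The restore.sh argument handling traced above follows the standard getopts pattern: "-c 0000:00:10.0" lands in nv_cache, the consumed options are shifted away, and the remaining positional argument becomes the base device. A minimal re-creation of the idiom under the ":u:c:f" optstring seen in the trace; the -u and -f arms are assumptions inferred from that optstring, since only -c is exercised in this run:

while getopts ':u:c:f' opt; do
  case $opt in
    u) uuid=$OPTARG ;;        # assumed: an FTL UUID for a restore run
    c) nv_cache=$OPTARG ;;    # NV cache PCIe address, 0000:00:10.0 here
    f) fast_shutdown=1 ;;     # assumed: a fast-shutdown variant
  esac
done
shift $(( OPTIND - 1 ))       # the traced script does an equivalent 'shift 2'
device=$1                     # 0000:00:11.0, the base bdev address
timeout=240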
19:07:58 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79272 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:26.928 19:07:58 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79272 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79272 ']' 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.928 19:07:58 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:27.187 [2024-11-26 19:07:58.266308] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:23:27.187 [2024-11-26 19:07:58.266545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79272 ] 00:23:27.444 [2024-11-26 19:07:58.463609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.444 [2024-11-26 19:07:58.634342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.378 19:07:59 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.378 19:07:59 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:23:28.378 19:07:59 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:28.378 19:07:59 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:23:28.378 19:07:59 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:28.378 19:07:59 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:23:28.378 19:07:59 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:23:28.378 19:07:59 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:28.945 19:07:59 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:28.945 19:07:59 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:23:28.945 19:07:59 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:28.945 19:07:59 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:28.945 19:07:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:28.945 19:07:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:28.945 19:07:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:28.945 19:07:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:29.204 19:08:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:29.204 { 00:23:29.204 "name": "nvme0n1", 00:23:29.204 "aliases": [ 00:23:29.204 "e6a97623-bcf6-4dbf-be13-b56e0382cc7d" 00:23:29.204 ], 00:23:29.204 "product_name": "NVMe disk", 00:23:29.204 "block_size": 4096, 00:23:29.204 "num_blocks": 1310720, 00:23:29.204 "uuid": 
"e6a97623-bcf6-4dbf-be13-b56e0382cc7d", 00:23:29.204 "numa_id": -1, 00:23:29.204 "assigned_rate_limits": { 00:23:29.204 "rw_ios_per_sec": 0, 00:23:29.204 "rw_mbytes_per_sec": 0, 00:23:29.204 "r_mbytes_per_sec": 0, 00:23:29.204 "w_mbytes_per_sec": 0 00:23:29.204 }, 00:23:29.204 "claimed": true, 00:23:29.204 "claim_type": "read_many_write_one", 00:23:29.204 "zoned": false, 00:23:29.204 "supported_io_types": { 00:23:29.204 "read": true, 00:23:29.204 "write": true, 00:23:29.204 "unmap": true, 00:23:29.204 "flush": true, 00:23:29.204 "reset": true, 00:23:29.204 "nvme_admin": true, 00:23:29.204 "nvme_io": true, 00:23:29.204 "nvme_io_md": false, 00:23:29.204 "write_zeroes": true, 00:23:29.204 "zcopy": false, 00:23:29.204 "get_zone_info": false, 00:23:29.204 "zone_management": false, 00:23:29.204 "zone_append": false, 00:23:29.204 "compare": true, 00:23:29.204 "compare_and_write": false, 00:23:29.204 "abort": true, 00:23:29.204 "seek_hole": false, 00:23:29.204 "seek_data": false, 00:23:29.204 "copy": true, 00:23:29.204 "nvme_iov_md": false 00:23:29.204 }, 00:23:29.204 "driver_specific": { 00:23:29.204 "nvme": [ 00:23:29.204 { 00:23:29.204 "pci_address": "0000:00:11.0", 00:23:29.204 "trid": { 00:23:29.204 "trtype": "PCIe", 00:23:29.204 "traddr": "0000:00:11.0" 00:23:29.204 }, 00:23:29.204 "ctrlr_data": { 00:23:29.204 "cntlid": 0, 00:23:29.204 "vendor_id": "0x1b36", 00:23:29.204 "model_number": "QEMU NVMe Ctrl", 00:23:29.204 "serial_number": "12341", 00:23:29.204 "firmware_revision": "8.0.0", 00:23:29.204 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:29.204 "oacs": { 00:23:29.204 "security": 0, 00:23:29.204 "format": 1, 00:23:29.204 "firmware": 0, 00:23:29.204 "ns_manage": 1 00:23:29.204 }, 00:23:29.204 "multi_ctrlr": false, 00:23:29.204 "ana_reporting": false 00:23:29.204 }, 00:23:29.204 "vs": { 00:23:29.204 "nvme_version": "1.4" 00:23:29.204 }, 00:23:29.204 "ns_data": { 00:23:29.204 "id": 1, 00:23:29.204 "can_share": false 00:23:29.204 } 00:23:29.204 } 00:23:29.204 ], 00:23:29.204 "mp_policy": "active_passive" 00:23:29.204 } 00:23:29.204 } 00:23:29.204 ]' 00:23:29.204 19:08:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:29.204 19:08:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:29.204 19:08:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:29.464 19:08:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:29.464 19:08:00 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:29.464 19:08:00 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:23:29.464 19:08:00 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:23:29.464 19:08:00 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:29.464 19:08:00 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:23:29.464 19:08:00 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:29.464 19:08:00 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:29.724 19:08:00 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=6c971fc8-cd1b-48b9-8e41-2039a9e3f39c 00:23:29.724 19:08:00 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:23:29.724 19:08:00 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c971fc8-cd1b-48b9-8e41-2039a9e3f39c 00:23:30.287 19:08:01 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:23:30.545 19:08:01 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b8d4e7ea-7f42-4def-abe8-59264bf90b77 00:23:30.545 19:08:01 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b8d4e7ea-7f42-4def-abe8-59264bf90b77 00:23:30.802 19:08:01 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=8740ef38-169e-4950-bc57-77050c8dfd99 00:23:30.802 19:08:01 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:30.802 19:08:01 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8740ef38-169e-4950-bc57-77050c8dfd99 00:23:30.802 19:08:01 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:23:30.802 19:08:01 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:30.802 19:08:01 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=8740ef38-169e-4950-bc57-77050c8dfd99 00:23:30.802 19:08:01 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:23:30.803 19:08:01 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 8740ef38-169e-4950-bc57-77050c8dfd99 00:23:30.803 19:08:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8740ef38-169e-4950-bc57-77050c8dfd99 00:23:30.803 19:08:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:30.803 19:08:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:30.803 19:08:01 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:30.803 19:08:01 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8740ef38-169e-4950-bc57-77050c8dfd99 00:23:31.062 19:08:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:31.063 { 00:23:31.063 "name": "8740ef38-169e-4950-bc57-77050c8dfd99", 00:23:31.063 "aliases": [ 00:23:31.063 "lvs/nvme0n1p0" 00:23:31.063 ], 00:23:31.063 "product_name": "Logical Volume", 00:23:31.063 "block_size": 4096, 00:23:31.063 "num_blocks": 26476544, 00:23:31.063 "uuid": "8740ef38-169e-4950-bc57-77050c8dfd99", 00:23:31.063 "assigned_rate_limits": { 00:23:31.063 "rw_ios_per_sec": 0, 00:23:31.063 "rw_mbytes_per_sec": 0, 00:23:31.063 "r_mbytes_per_sec": 0, 00:23:31.063 "w_mbytes_per_sec": 0 00:23:31.063 }, 00:23:31.063 "claimed": false, 00:23:31.063 "zoned": false, 00:23:31.063 "supported_io_types": { 00:23:31.063 "read": true, 00:23:31.063 "write": true, 00:23:31.063 "unmap": true, 00:23:31.063 "flush": false, 00:23:31.063 "reset": true, 00:23:31.063 "nvme_admin": false, 00:23:31.063 "nvme_io": false, 00:23:31.063 "nvme_io_md": false, 00:23:31.063 "write_zeroes": true, 00:23:31.063 "zcopy": false, 00:23:31.063 "get_zone_info": false, 00:23:31.063 "zone_management": false, 00:23:31.063 "zone_append": false, 00:23:31.063 "compare": false, 00:23:31.063 "compare_and_write": false, 00:23:31.063 "abort": false, 00:23:31.063 "seek_hole": true, 00:23:31.063 "seek_data": true, 00:23:31.063 "copy": false, 00:23:31.063 "nvme_iov_md": false 00:23:31.063 }, 00:23:31.063 "driver_specific": { 00:23:31.063 "lvol": { 00:23:31.063 "lvol_store_uuid": "b8d4e7ea-7f42-4def-abe8-59264bf90b77", 00:23:31.063 "base_bdev": "nvme0n1", 00:23:31.063 "thin_provision": true, 00:23:31.063 "num_allocated_clusters": 0, 00:23:31.063 "snapshot": false, 00:23:31.063 "clone": false, 00:23:31.063 "esnap_clone": false 00:23:31.063 } 00:23:31.063 } 00:23:31.063 } 00:23:31.063 ]' 00:23:31.063 19:08:02 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:31.063 19:08:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:31.063 19:08:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:31.322 19:08:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:31.322 19:08:02 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:31.322 19:08:02 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:31.322 19:08:02 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:23:31.322 19:08:02 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:23:31.322 19:08:02 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:31.578 19:08:02 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:31.578 19:08:02 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:31.578 19:08:02 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 8740ef38-169e-4950-bc57-77050c8dfd99 00:23:31.578 19:08:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8740ef38-169e-4950-bc57-77050c8dfd99 00:23:31.578 19:08:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:31.578 19:08:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:31.578 19:08:02 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:31.578 19:08:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8740ef38-169e-4950-bc57-77050c8dfd99 00:23:32.142 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:32.142 { 00:23:32.142 "name": "8740ef38-169e-4950-bc57-77050c8dfd99", 00:23:32.142 "aliases": [ 00:23:32.142 "lvs/nvme0n1p0" 00:23:32.142 ], 00:23:32.142 "product_name": "Logical Volume", 00:23:32.142 "block_size": 4096, 00:23:32.142 "num_blocks": 26476544, 00:23:32.142 "uuid": "8740ef38-169e-4950-bc57-77050c8dfd99", 00:23:32.142 "assigned_rate_limits": { 00:23:32.142 "rw_ios_per_sec": 0, 00:23:32.142 "rw_mbytes_per_sec": 0, 00:23:32.142 "r_mbytes_per_sec": 0, 00:23:32.142 "w_mbytes_per_sec": 0 00:23:32.142 }, 00:23:32.142 "claimed": false, 00:23:32.142 "zoned": false, 00:23:32.142 "supported_io_types": { 00:23:32.142 "read": true, 00:23:32.142 "write": true, 00:23:32.142 "unmap": true, 00:23:32.142 "flush": false, 00:23:32.142 "reset": true, 00:23:32.142 "nvme_admin": false, 00:23:32.142 "nvme_io": false, 00:23:32.142 "nvme_io_md": false, 00:23:32.142 "write_zeroes": true, 00:23:32.142 "zcopy": false, 00:23:32.142 "get_zone_info": false, 00:23:32.142 "zone_management": false, 00:23:32.142 "zone_append": false, 00:23:32.142 "compare": false, 00:23:32.142 "compare_and_write": false, 00:23:32.142 "abort": false, 00:23:32.142 "seek_hole": true, 00:23:32.142 "seek_data": true, 00:23:32.142 "copy": false, 00:23:32.142 "nvme_iov_md": false 00:23:32.142 }, 00:23:32.142 "driver_specific": { 00:23:32.142 "lvol": { 00:23:32.142 "lvol_store_uuid": "b8d4e7ea-7f42-4def-abe8-59264bf90b77", 00:23:32.142 "base_bdev": "nvme0n1", 00:23:32.142 "thin_provision": true, 00:23:32.142 "num_allocated_clusters": 0, 00:23:32.142 "snapshot": false, 00:23:32.142 "clone": false, 00:23:32.142 "esnap_clone": false 00:23:32.142 } 00:23:32.142 } 00:23:32.142 } 00:23:32.142 ]' 00:23:32.142 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
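get_bdev_size, stepped through above, is just block_size × num_blocks converted to MiB: jq pulls both fields out of the bdev_get_bdevs JSON and the shell does the arithmetic (4096 × 26476544 / 2^20 = 103424 MiB for the lvol, and earlier 4096 × 1310720 / 2^20 = 5120 MiB for the raw namespace). A stand-alone sketch, with the bdev name and repo path assumed:

info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1)
bs=$(jq '.[] .block_size' <<< "$info")    # 4096
nb=$(jq '.[] .num_blocks' <<< "$info")    # 1310720
echo $(( bs * nb / 1024 / 1024 ))         # 5120 (MiB)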
00:23:32.142 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:32.142 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:32.142 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:32.142 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:32.142 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:32.142 19:08:03 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:23:32.142 19:08:03 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:32.400 19:08:03 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:32.400 19:08:03 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 8740ef38-169e-4950-bc57-77050c8dfd99 00:23:32.400 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8740ef38-169e-4950-bc57-77050c8dfd99 00:23:32.400 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:32.400 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:32.400 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:32.400 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8740ef38-169e-4950-bc57-77050c8dfd99 00:23:32.658 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:32.658 { 00:23:32.658 "name": "8740ef38-169e-4950-bc57-77050c8dfd99", 00:23:32.658 "aliases": [ 00:23:32.658 "lvs/nvme0n1p0" 00:23:32.658 ], 00:23:32.658 "product_name": "Logical Volume", 00:23:32.658 "block_size": 4096, 00:23:32.658 "num_blocks": 26476544, 00:23:32.658 "uuid": "8740ef38-169e-4950-bc57-77050c8dfd99", 00:23:32.658 "assigned_rate_limits": { 00:23:32.658 "rw_ios_per_sec": 0, 00:23:32.658 "rw_mbytes_per_sec": 0, 00:23:32.658 "r_mbytes_per_sec": 0, 00:23:32.658 "w_mbytes_per_sec": 0 00:23:32.658 }, 00:23:32.658 "claimed": false, 00:23:32.658 "zoned": false, 00:23:32.658 "supported_io_types": { 00:23:32.658 "read": true, 00:23:32.658 "write": true, 00:23:32.658 "unmap": true, 00:23:32.658 "flush": false, 00:23:32.658 "reset": true, 00:23:32.658 "nvme_admin": false, 00:23:32.658 "nvme_io": false, 00:23:32.658 "nvme_io_md": false, 00:23:32.658 "write_zeroes": true, 00:23:32.658 "zcopy": false, 00:23:32.658 "get_zone_info": false, 00:23:32.658 "zone_management": false, 00:23:32.658 "zone_append": false, 00:23:32.658 "compare": false, 00:23:32.658 "compare_and_write": false, 00:23:32.658 "abort": false, 00:23:32.658 "seek_hole": true, 00:23:32.658 "seek_data": true, 00:23:32.658 "copy": false, 00:23:32.658 "nvme_iov_md": false 00:23:32.658 }, 00:23:32.658 "driver_specific": { 00:23:32.658 "lvol": { 00:23:32.658 "lvol_store_uuid": "b8d4e7ea-7f42-4def-abe8-59264bf90b77", 00:23:32.658 "base_bdev": "nvme0n1", 00:23:32.658 "thin_provision": true, 00:23:32.659 "num_allocated_clusters": 0, 00:23:32.659 "snapshot": false, 00:23:32.659 "clone": false, 00:23:32.659 "esnap_clone": false 00:23:32.659 } 00:23:32.659 } 00:23:32.659 } 00:23:32.659 ]' 00:23:32.659 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:32.916 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:32.916 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:32.916 19:08:03 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:23:32.916 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:32.916 19:08:03 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:32.916 19:08:03 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:32.916 19:08:03 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8740ef38-169e-4950-bc57-77050c8dfd99 --l2p_dram_limit 10' 00:23:32.916 19:08:03 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:32.916 19:08:03 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:32.916 19:08:03 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:32.916 19:08:03 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:23:32.916 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:23:32.916 19:08:03 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8740ef38-169e-4950-bc57-77050c8dfd99 --l2p_dram_limit 10 -c nvc0n1p0 00:23:33.174 [2024-11-26 19:08:04.186713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.174 [2024-11-26 19:08:04.186781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:33.174 [2024-11-26 19:08:04.186806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:33.174 [2024-11-26 19:08:04.186819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.174 [2024-11-26 19:08:04.186944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.174 [2024-11-26 19:08:04.186968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.174 [2024-11-26 19:08:04.186985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:33.174 [2024-11-26 19:08:04.186997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.174 [2024-11-26 19:08:04.187033] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:33.174 [2024-11-26 19:08:04.188058] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:33.174 [2024-11-26 19:08:04.188115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.174 [2024-11-26 19:08:04.188133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.174 [2024-11-26 19:08:04.188148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:23:33.174 [2024-11-26 19:08:04.188160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.174 [2024-11-26 19:08:04.188368] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 61cf15d5-2808-44b7-8dd9-1f70723e1c5d 00:23:33.174 [2024-11-26 19:08:04.189459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.174 [2024-11-26 19:08:04.189502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:33.174 [2024-11-26 19:08:04.189519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:33.174 [2024-11-26 19:08:04.189536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.174 [2024-11-26 19:08:04.194607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.174 [2024-11-26 
19:08:04.194705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.174 [2024-11-26 19:08:04.194726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.003 ms 00:23:33.174 [2024-11-26 19:08:04.194743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.174 [2024-11-26 19:08:04.194904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.174 [2024-11-26 19:08:04.194929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.174 [2024-11-26 19:08:04.194943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:33.174 [2024-11-26 19:08:04.194962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.174 [2024-11-26 19:08:04.195064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.174 [2024-11-26 19:08:04.195102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:33.174 [2024-11-26 19:08:04.195121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:33.174 [2024-11-26 19:08:04.195135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.174 [2024-11-26 19:08:04.195192] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:33.174 [2024-11-26 19:08:04.201227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.174 [2024-11-26 19:08:04.201356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.174 [2024-11-26 19:08:04.201403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.054 ms 00:23:33.174 [2024-11-26 19:08:04.201428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.174 [2024-11-26 19:08:04.201519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.174 [2024-11-26 19:08:04.201538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:33.174 [2024-11-26 19:08:04.201553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:33.174 [2024-11-26 19:08:04.201566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.174 [2024-11-26 19:08:04.201653] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:33.174 [2024-11-26 19:08:04.201833] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:33.174 [2024-11-26 19:08:04.201873] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:33.174 [2024-11-26 19:08:04.201892] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:33.174 [2024-11-26 19:08:04.201911] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:33.174 [2024-11-26 19:08:04.201926] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:33.174 [2024-11-26 19:08:04.201942] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:33.174 [2024-11-26 19:08:04.201956] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:33.174 [2024-11-26 19:08:04.201970] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:33.174 [2024-11-26 19:08:04.201981] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:33.174 [2024-11-26 19:08:04.201996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.174 [2024-11-26 19:08:04.202021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:33.174 [2024-11-26 19:08:04.202037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:23:33.174 [2024-11-26 19:08:04.202048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.174 [2024-11-26 19:08:04.202152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.175 [2024-11-26 19:08:04.202195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:33.175 [2024-11-26 19:08:04.202213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:33.175 [2024-11-26 19:08:04.202225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.175 [2024-11-26 19:08:04.202353] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:33.175 [2024-11-26 19:08:04.202372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:33.175 [2024-11-26 19:08:04.202388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:33.175 [2024-11-26 19:08:04.202400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:33.175 [2024-11-26 19:08:04.202425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:33.175 [2024-11-26 19:08:04.202451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:33.175 [2024-11-26 19:08:04.202464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:33.175 [2024-11-26 19:08:04.202489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:33.175 [2024-11-26 19:08:04.202500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:33.175 [2024-11-26 19:08:04.202514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:33.175 [2024-11-26 19:08:04.202525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:33.175 [2024-11-26 19:08:04.202538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:33.175 [2024-11-26 19:08:04.202549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:33.175 [2024-11-26 19:08:04.202576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:33.175 [2024-11-26 19:08:04.202591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:33.175 [2024-11-26 19:08:04.202616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.175 [2024-11-26 19:08:04.202640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:33.175 
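[editor's note] The "[: : integer expression expected" warning from restore.sh line 54 a few lines above is harmless here: the traced test `'[' '' -eq 1 ']'` feeds an empty string to an arithmetic comparison, the test simply fails, and the script falls through to the bdev_ftl_create call at @58. A defensive guard would silence the warning (a sketch only, not the upstream fix; `$fast_shutdown` is an illustrative name, not the variable restore.sh actually uses):

    # Test the flag only when it is actually set to something numeric.
    if [ -n "${fast_shutdown:-}" ] && [ "$fast_shutdown" -eq 1 ]; then
        : # flag-enabled path
    fi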
[2024-11-26 19:08:04.202652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.175 [2024-11-26 19:08:04.202676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:33.175 [2024-11-26 19:08:04.202689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.175 [2024-11-26 19:08:04.202718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:33.175 [2024-11-26 19:08:04.202736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.175 [2024-11-26 19:08:04.202764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:33.175 [2024-11-26 19:08:04.202779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:33.175 [2024-11-26 19:08:04.202804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:33.175 [2024-11-26 19:08:04.202815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:33.175 [2024-11-26 19:08:04.202829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:33.175 [2024-11-26 19:08:04.202841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:33.175 [2024-11-26 19:08:04.202854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:33.175 [2024-11-26 19:08:04.202865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:33.175 [2024-11-26 19:08:04.202889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:33.175 [2024-11-26 19:08:04.202908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.175 [2024-11-26 19:08:04.202925] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:33.175 [2024-11-26 19:08:04.202946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:33.175 [2024-11-26 19:08:04.202965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:33.175 [2024-11-26 19:08:04.202990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.175 [2024-11-26 19:08:04.203010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:33.175 [2024-11-26 19:08:04.203036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:33.175 [2024-11-26 19:08:04.203064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:33.175 [2024-11-26 19:08:04.203089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:33.175 [2024-11-26 19:08:04.203110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:33.175 [2024-11-26 19:08:04.203132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:33.175 [2024-11-26 19:08:04.203159] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:33.175 [2024-11-26 
19:08:04.203212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:33.175 [2024-11-26 19:08:04.203236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:33.175 [2024-11-26 19:08:04.203251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:33.175 [2024-11-26 19:08:04.203264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:33.175 [2024-11-26 19:08:04.203277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:33.175 [2024-11-26 19:08:04.203289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:33.175 [2024-11-26 19:08:04.203303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:33.175 [2024-11-26 19:08:04.203315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:33.175 [2024-11-26 19:08:04.203329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:33.175 [2024-11-26 19:08:04.203341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:33.175 [2024-11-26 19:08:04.203356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:33.175 [2024-11-26 19:08:04.203368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:33.175 [2024-11-26 19:08:04.203381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:33.175 [2024-11-26 19:08:04.203393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:33.175 [2024-11-26 19:08:04.203410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:33.175 [2024-11-26 19:08:04.203421] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:33.175 [2024-11-26 19:08:04.203437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:33.175 [2024-11-26 19:08:04.203449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:33.175 [2024-11-26 19:08:04.203463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:33.175 [2024-11-26 19:08:04.203477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:33.175 [2024-11-26 19:08:04.203491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:33.175 [2024-11-26 19:08:04.203505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.175 [2024-11-26 19:08:04.203519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:33.175 [2024-11-26 19:08:04.203533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:23:33.175 [2024-11-26 19:08:04.203547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.175 [2024-11-26 19:08:04.203624] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:33.175 [2024-11-26 19:08:04.203655] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:35.136 [2024-11-26 19:08:06.253937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.136 [2024-11-26 19:08:06.254035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:35.136 [2024-11-26 19:08:06.254066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2050.321 ms 00:23:35.136 [2024-11-26 19:08:06.254084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.136 [2024-11-26 19:08:06.287256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.136 [2024-11-26 19:08:06.287337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:35.136 [2024-11-26 19:08:06.287361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.827 ms 00:23:35.136 [2024-11-26 19:08:06.287377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.136 [2024-11-26 19:08:06.287581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.136 [2024-11-26 19:08:06.287607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:35.136 [2024-11-26 19:08:06.287622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:35.136 [2024-11-26 19:08:06.287710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.136 [2024-11-26 19:08:06.328788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.136 [2024-11-26 19:08:06.328870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:35.136 [2024-11-26 19:08:06.328892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.014 ms 00:23:35.136 [2024-11-26 19:08:06.328918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.136 [2024-11-26 19:08:06.328988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.136 [2024-11-26 19:08:06.329008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:35.136 [2024-11-26 19:08:06.329022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:35.136 [2024-11-26 19:08:06.329050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.136 [2024-11-26 19:08:06.329495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.136 [2024-11-26 19:08:06.329530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:35.136 [2024-11-26 19:08:06.329545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:23:35.136 [2024-11-26 19:08:06.329560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.136 
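[editor's note] The capacity figures in the layout dump above are internally consistent and can be checked with shell arithmetic (assuming the FTL's 4096-byte block size, which the numbers themselves imply):

    echo $((26476544 * 4096 / 1024 / 1024))        # nb blocks -> 103424 MiB base device capacity
    echo $((20971520 * 4 / 1024 / 1024))           # L2P entries x 4 B  -> 80 MiB "Region l2p"
    echo $((20971520 * 4096 / 1024 / 1024 / 1024)) # entries x blk size -> 80 GiB user-visible space

The `--l2p_dram_limit 10` passed to bdev_ftl_create earlier caps the DRAM-resident portion of that 80 MiB mapping table; the startup log below confirms it ("l2p maximum resident size is: 9 (of 10) MiB").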
[2024-11-26 19:08:06.329700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.136 [2024-11-26 19:08:06.329721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:35.136 [2024-11-26 19:08:06.329734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:23:35.136 [2024-11-26 19:08:06.329750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.136 [2024-11-26 19:08:06.348583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.136 [2024-11-26 19:08:06.348655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:35.136 [2024-11-26 19:08:06.348678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.803 ms 00:23:35.136 [2024-11-26 19:08:06.348694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.395 [2024-11-26 19:08:06.375332] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:35.395 [2024-11-26 19:08:06.378613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.395 [2024-11-26 19:08:06.378681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:35.395 [2024-11-26 19:08:06.378708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.755 ms 00:23:35.395 [2024-11-26 19:08:06.378725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.395 [2024-11-26 19:08:06.441639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.395 [2024-11-26 19:08:06.441728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:35.395 [2024-11-26 19:08:06.441759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.811 ms 00:23:35.395 [2024-11-26 19:08:06.441775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.395 [2024-11-26 19:08:06.442086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.395 [2024-11-26 19:08:06.442123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:35.395 [2024-11-26 19:08:06.442148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:23:35.395 [2024-11-26 19:08:06.442164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.395 [2024-11-26 19:08:06.482872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.395 [2024-11-26 19:08:06.482959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:35.395 [2024-11-26 19:08:06.482989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.520 ms 00:23:35.395 [2024-11-26 19:08:06.483006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.395 [2024-11-26 19:08:06.522711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.395 [2024-11-26 19:08:06.522811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:35.395 [2024-11-26 19:08:06.522841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.575 ms 00:23:35.395 [2024-11-26 19:08:06.522857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.395 [2024-11-26 19:08:06.523842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.395 [2024-11-26 19:08:06.523884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:35.395 
[2024-11-26 19:08:06.523911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:23:35.395 [2024-11-26 19:08:06.523926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.654 [2024-11-26 19:08:06.625770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.654 [2024-11-26 19:08:06.625861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:35.654 [2024-11-26 19:08:06.625895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.715 ms 00:23:35.654 [2024-11-26 19:08:06.625912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.654 [2024-11-26 19:08:06.666688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.654 [2024-11-26 19:08:06.666775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:35.654 [2024-11-26 19:08:06.666804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.571 ms 00:23:35.654 [2024-11-26 19:08:06.666820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.654 [2024-11-26 19:08:06.707159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.654 [2024-11-26 19:08:06.707261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:35.654 [2024-11-26 19:08:06.707291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.149 ms 00:23:35.654 [2024-11-26 19:08:06.707307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.654 [2024-11-26 19:08:06.747938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.654 [2024-11-26 19:08:06.748042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:35.654 [2024-11-26 19:08:06.748073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.511 ms 00:23:35.654 [2024-11-26 19:08:06.748089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.654 [2024-11-26 19:08:06.748237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.654 [2024-11-26 19:08:06.748273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:35.654 [2024-11-26 19:08:06.748314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:35.654 [2024-11-26 19:08:06.748337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.654 [2024-11-26 19:08:06.748589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.654 [2024-11-26 19:08:06.748657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:35.654 [2024-11-26 19:08:06.748696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:35.654 [2024-11-26 19:08:06.748735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.654 [2024-11-26 19:08:06.750251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2562.903 ms, result 0 00:23:35.654 { 00:23:35.654 "name": "ftl0", 00:23:35.654 "uuid": "61cf15d5-2808-44b7-8dd9-1f70723e1c5d" 00:23:35.654 } 00:23:35.654 19:08:06 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:35.654 19:08:06 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:35.914 19:08:07 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:23:35.914 19:08:07 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:36.193 [2024-11-26 19:08:07.357350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.193 [2024-11-26 19:08:07.357442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:36.193 [2024-11-26 19:08:07.357465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:36.193 [2024-11-26 19:08:07.357480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.193 [2024-11-26 19:08:07.357519] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:36.193 [2024-11-26 19:08:07.361130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.193 [2024-11-26 19:08:07.361205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:36.193 [2024-11-26 19:08:07.361229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.571 ms 00:23:36.193 [2024-11-26 19:08:07.361242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.193 [2024-11-26 19:08:07.361745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.193 [2024-11-26 19:08:07.361785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:36.193 [2024-11-26 19:08:07.361805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:23:36.193 [2024-11-26 19:08:07.361817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.193 [2024-11-26 19:08:07.365428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.193 [2024-11-26 19:08:07.365484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:36.193 [2024-11-26 19:08:07.365506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.576 ms 00:23:36.193 [2024-11-26 19:08:07.365519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.193 [2024-11-26 19:08:07.373038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.193 [2024-11-26 19:08:07.373109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:36.193 [2024-11-26 19:08:07.373133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.287 ms 00:23:36.193 [2024-11-26 19:08:07.373146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.193 [2024-11-26 19:08:07.406010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.193 [2024-11-26 19:08:07.406095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:36.193 [2024-11-26 19:08:07.406122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.648 ms 00:23:36.193 [2024-11-26 19:08:07.406135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.479 [2024-11-26 19:08:07.425579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.479 [2024-11-26 19:08:07.425686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:36.479 [2024-11-26 19:08:07.425716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.313 ms 00:23:36.479 [2024-11-26 19:08:07.425735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.479 [2024-11-26 19:08:07.426006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.480 [2024-11-26 19:08:07.426030] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:36.480 [2024-11-26 19:08:07.426047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:23:36.480 [2024-11-26 19:08:07.426060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.480 [2024-11-26 19:08:07.459094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.480 [2024-11-26 19:08:07.459188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:36.480 [2024-11-26 19:08:07.459215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.991 ms 00:23:36.480 [2024-11-26 19:08:07.459229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.480 [2024-11-26 19:08:07.492217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.480 [2024-11-26 19:08:07.492300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:36.480 [2024-11-26 19:08:07.492326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.866 ms 00:23:36.480 [2024-11-26 19:08:07.492340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.480 [2024-11-26 19:08:07.524615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.480 [2024-11-26 19:08:07.524704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:36.480 [2024-11-26 19:08:07.524730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.165 ms 00:23:36.480 [2024-11-26 19:08:07.524743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.480 [2024-11-26 19:08:07.560383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.480 [2024-11-26 19:08:07.560470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:36.480 [2024-11-26 19:08:07.560495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.408 ms 00:23:36.480 [2024-11-26 19:08:07.560509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.480 [2024-11-26 19:08:07.560614] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:36.480 [2024-11-26 19:08:07.560642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560780] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.560997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 
[2024-11-26 19:08:07.561349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:23:36.480 [2024-11-26 19:08:07.561924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.561977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.562001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.562030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.562048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.562063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.562075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.562095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.562118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.562149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.562193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:36.480 [2024-11-26 19:08:07.562233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:36.481 [2024-11-26 19:08:07.562951] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:36.481 [2024-11-26 19:08:07.562980] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 61cf15d5-2808-44b7-8dd9-1f70723e1c5d 00:23:36.481 [2024-11-26 19:08:07.563001] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:36.481 [2024-11-26 19:08:07.563018] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:36.481 [2024-11-26 19:08:07.563040] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:36.481 [2024-11-26 19:08:07.563067] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:36.481 [2024-11-26 19:08:07.563090] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:36.481 [2024-11-26 19:08:07.563118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:36.481 [2024-11-26 19:08:07.563140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:36.481 [2024-11-26 19:08:07.563165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:36.481 [2024-11-26 19:08:07.563199] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:23:36.481 [2024-11-26 19:08:07.563218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.481 [2024-11-26 19:08:07.563239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:36.481 [2024-11-26 19:08:07.563266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.606 ms 00:23:36.481 [2024-11-26 19:08:07.563294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.481 [2024-11-26 19:08:07.581060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.481 [2024-11-26 19:08:07.581155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:36.481 [2024-11-26 19:08:07.581201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.601 ms 00:23:36.481 [2024-11-26 19:08:07.581217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.481 [2024-11-26 19:08:07.581809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.481 [2024-11-26 19:08:07.581848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:36.481 [2024-11-26 19:08:07.581871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:23:36.481 [2024-11-26 19:08:07.581884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.481 [2024-11-26 19:08:07.637695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.481 [2024-11-26 19:08:07.637778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.481 [2024-11-26 19:08:07.637801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.481 [2024-11-26 19:08:07.637815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.481 [2024-11-26 19:08:07.637914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.481 [2024-11-26 19:08:07.637931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.481 [2024-11-26 19:08:07.637950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.481 [2024-11-26 19:08:07.637962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.481 [2024-11-26 19:08:07.638121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.481 [2024-11-26 19:08:07.638152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.481 [2024-11-26 19:08:07.638207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.481 [2024-11-26 19:08:07.638235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.481 [2024-11-26 19:08:07.638291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.481 [2024-11-26 19:08:07.638311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.481 [2024-11-26 19:08:07.638329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.481 [2024-11-26 19:08:07.638352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.762 [2024-11-26 19:08:07.743534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.762 [2024-11-26 19:08:07.743630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.762 [2024-11-26 19:08:07.743656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
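[editor's note] The "WAF: inf" in the statistics dump above follows directly from the counters printed with it: write amplification is media writes divided by user writes, i.e. WAF = total writes / user writes = 960 / 0, reported as inf. No user I/O has touched the device yet; the 960 media writes are presumably the metadata (superblock, band/chunk info, valid/trim maps) persisted by the shutdown steps traced above.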
00:23:36.762 [2024-11-26 19:08:07.743669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.762 [2024-11-26 19:08:07.831339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.762 [2024-11-26 19:08:07.831419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.762 [2024-11-26 19:08:07.831448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.762 [2024-11-26 19:08:07.831461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.762 [2024-11-26 19:08:07.831624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.762 [2024-11-26 19:08:07.831644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.762 [2024-11-26 19:08:07.831659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.762 [2024-11-26 19:08:07.831671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.762 [2024-11-26 19:08:07.831767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.762 [2024-11-26 19:08:07.831798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.762 [2024-11-26 19:08:07.831845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.762 [2024-11-26 19:08:07.831878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.762 [2024-11-26 19:08:07.832095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.762 [2024-11-26 19:08:07.832140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.762 [2024-11-26 19:08:07.832206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.762 [2024-11-26 19:08:07.832237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.762 [2024-11-26 19:08:07.832333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.762 [2024-11-26 19:08:07.832377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:36.762 [2024-11-26 19:08:07.832408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.762 [2024-11-26 19:08:07.832432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.762 [2024-11-26 19:08:07.832520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.762 [2024-11-26 19:08:07.832558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.762 [2024-11-26 19:08:07.832587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.762 [2024-11-26 19:08:07.832621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.762 [2024-11-26 19:08:07.832721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.762 [2024-11-26 19:08:07.832769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.762 [2024-11-26 19:08:07.832807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.762 [2024-11-26 19:08:07.832830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.762 [2024-11-26 19:08:07.833088] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 475.668 ms, result 0 00:23:36.762 true 00:23:36.762 19:08:07 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79272 
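[editor's note] killprocess is traced command-by-command in the lines that follow. A minimal bash sketch consistent with that trace (the real helper lives in common/autotest_common.sh and handles more cases, e.g. targets running under sudo; the @-references below point at the traced line numbers):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                # @954: no pid supplied
        kill -0 "$pid" 2>/dev/null || return 1   # @958: process already gone
        if [ "$(uname)" = Linux ]; then          # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: "reactor_0" here
        fi
        if [ "$process_name" != sudo ]; then     # @964: a sudo wrapper would need its child signalled instead
            echo "killing process with pid $pid"
            kill "$pid"                          # @973
        fi
        wait "$pid" 2>/dev/null || true          # @978: reap; ignore "not a child" noise
    }

For reference, the dd step further down also checks out: 1073741824 bytes / 5.18355 s ≈ 207 MB/s, matching the rate dd prints.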
00:23:36.762 19:08:07 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79272 ']' 00:23:36.762 19:08:07 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79272 00:23:36.762 19:08:07 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:23:36.762 19:08:07 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.762 19:08:07 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79272 00:23:36.762 19:08:07 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.762 19:08:07 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.762 killing process with pid 79272 00:23:36.762 19:08:07 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79272' 00:23:36.762 19:08:07 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79272 00:23:36.762 19:08:07 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79272 00:23:40.044 19:08:11 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:23:45.307 262144+0 records in 00:23:45.307 262144+0 records out 00:23:45.307 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.18355 s, 207 MB/s 00:23:45.307 19:08:16 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:47.845 19:08:18 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:47.845 [2024-11-26 19:08:18.689593] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:23:47.845 [2024-11-26 19:08:18.689831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79526 ] 00:23:47.845 [2024-11-26 19:08:18.871624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.845 [2024-11-26 19:08:18.978838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.413 [2024-11-26 19:08:19.326461] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:48.413 [2024-11-26 19:08:19.326568] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:48.413 [2024-11-26 19:08:19.497662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.497750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:48.413 [2024-11-26 19:08:19.497771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:48.413 [2024-11-26 19:08:19.497784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.413 [2024-11-26 19:08:19.497877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.497902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:48.413 [2024-11-26 19:08:19.497915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:48.413 [2024-11-26 19:08:19.497927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.413 [2024-11-26 19:08:19.497960] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:23:48.413 [2024-11-26 19:08:19.498973] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:48.413 [2024-11-26 19:08:19.499015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.499029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:48.413 [2024-11-26 19:08:19.499042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:23:48.413 [2024-11-26 19:08:19.499053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.413 [2024-11-26 19:08:19.500466] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:48.413 [2024-11-26 19:08:19.518905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.519001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:48.413 [2024-11-26 19:08:19.519022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.433 ms 00:23:48.413 [2024-11-26 19:08:19.519035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.413 [2024-11-26 19:08:19.519243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.519265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:48.413 [2024-11-26 19:08:19.519279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:48.413 [2024-11-26 19:08:19.519290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.413 [2024-11-26 19:08:19.524504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.524582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:48.413 [2024-11-26 19:08:19.524600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.051 ms 00:23:48.413 [2024-11-26 19:08:19.524639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.413 [2024-11-26 19:08:19.524800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.524822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:48.413 [2024-11-26 19:08:19.524836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:23:48.413 [2024-11-26 19:08:19.524847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.413 [2024-11-26 19:08:19.524930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.524954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:48.413 [2024-11-26 19:08:19.524967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:48.413 [2024-11-26 19:08:19.524978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.413 [2024-11-26 19:08:19.525028] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:48.413 [2024-11-26 19:08:19.529494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.529557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:48.413 [2024-11-26 19:08:19.529586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.473 ms 00:23:48.413 [2024-11-26 19:08:19.529598] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.413 [2024-11-26 19:08:19.529669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.529686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:48.413 [2024-11-26 19:08:19.529698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:48.413 [2024-11-26 19:08:19.529709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.413 [2024-11-26 19:08:19.529800] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:48.413 [2024-11-26 19:08:19.529843] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:48.413 [2024-11-26 19:08:19.529890] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:48.413 [2024-11-26 19:08:19.529926] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:48.413 [2024-11-26 19:08:19.530043] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:48.413 [2024-11-26 19:08:19.530067] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:48.413 [2024-11-26 19:08:19.530083] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:48.413 [2024-11-26 19:08:19.530098] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:48.413 [2024-11-26 19:08:19.530112] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:48.413 [2024-11-26 19:08:19.530125] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:48.413 [2024-11-26 19:08:19.530137] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:48.413 [2024-11-26 19:08:19.530158] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:48.413 [2024-11-26 19:08:19.530190] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:48.413 [2024-11-26 19:08:19.530206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.530218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:48.413 [2024-11-26 19:08:19.530230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:23:48.413 [2024-11-26 19:08:19.530241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.413 [2024-11-26 19:08:19.530343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.413 [2024-11-26 19:08:19.530368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:48.413 [2024-11-26 19:08:19.530382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:48.413 [2024-11-26 19:08:19.530393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.414 [2024-11-26 19:08:19.530532] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:48.414 [2024-11-26 19:08:19.530555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:48.414 [2024-11-26 19:08:19.530568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:23:48.414 [2024-11-26 19:08:19.530581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.414 [2024-11-26 19:08:19.530592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:48.414 [2024-11-26 19:08:19.530602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:48.414 [2024-11-26 19:08:19.530614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:48.414 [2024-11-26 19:08:19.530624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:48.414 [2024-11-26 19:08:19.530635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:48.414 [2024-11-26 19:08:19.530646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:48.414 [2024-11-26 19:08:19.530656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:48.414 [2024-11-26 19:08:19.530667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:48.414 [2024-11-26 19:08:19.530678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:48.414 [2024-11-26 19:08:19.530709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:48.414 [2024-11-26 19:08:19.530720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:48.414 [2024-11-26 19:08:19.530730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.414 [2024-11-26 19:08:19.530741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:48.414 [2024-11-26 19:08:19.530752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:48.414 [2024-11-26 19:08:19.530763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.414 [2024-11-26 19:08:19.530773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:48.414 [2024-11-26 19:08:19.530784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:48.414 [2024-11-26 19:08:19.530795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.414 [2024-11-26 19:08:19.530807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:48.414 [2024-11-26 19:08:19.530818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:48.414 [2024-11-26 19:08:19.530829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.414 [2024-11-26 19:08:19.530839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:48.414 [2024-11-26 19:08:19.530849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:48.414 [2024-11-26 19:08:19.530860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.414 [2024-11-26 19:08:19.530870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:48.414 [2024-11-26 19:08:19.530880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:48.414 [2024-11-26 19:08:19.530891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.414 [2024-11-26 19:08:19.530901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:48.414 [2024-11-26 19:08:19.530912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:48.414 [2024-11-26 19:08:19.530923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:48.414 [2024-11-26 19:08:19.530933] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:23:48.414 [2024-11-26 19:08:19.530943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:48.414 [2024-11-26 19:08:19.530954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:48.414 [2024-11-26 19:08:19.530964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:48.414 [2024-11-26 19:08:19.530974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:48.414 [2024-11-26 19:08:19.530985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.414 [2024-11-26 19:08:19.530995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:48.414 [2024-11-26 19:08:19.531005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:48.414 [2024-11-26 19:08:19.531016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.414 [2024-11-26 19:08:19.531026] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:48.414 [2024-11-26 19:08:19.531038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:48.414 [2024-11-26 19:08:19.531049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:48.414 [2024-11-26 19:08:19.531060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.414 [2024-11-26 19:08:19.531071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:48.414 [2024-11-26 19:08:19.531082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:48.414 [2024-11-26 19:08:19.531093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:48.414 [2024-11-26 19:08:19.531103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:48.414 [2024-11-26 19:08:19.531113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:48.414 [2024-11-26 19:08:19.531124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:48.414 [2024-11-26 19:08:19.531137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:48.414 [2024-11-26 19:08:19.531152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:48.414 [2024-11-26 19:08:19.531209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:48.414 [2024-11-26 19:08:19.531223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:48.414 [2024-11-26 19:08:19.531234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:48.414 [2024-11-26 19:08:19.531246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:48.414 [2024-11-26 19:08:19.531257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:48.414 [2024-11-26 19:08:19.531268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:48.414 [2024-11-26 19:08:19.531279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:48.414 [2024-11-26 19:08:19.531290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:48.414 [2024-11-26 19:08:19.531301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:48.414 [2024-11-26 19:08:19.531313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:48.414 [2024-11-26 19:08:19.531324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:48.414 [2024-11-26 19:08:19.531335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:48.414 [2024-11-26 19:08:19.531346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:48.414 [2024-11-26 19:08:19.531358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:48.414 [2024-11-26 19:08:19.531369] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:48.414 [2024-11-26 19:08:19.531381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:48.414 [2024-11-26 19:08:19.531393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:48.414 [2024-11-26 19:08:19.531404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:48.415 [2024-11-26 19:08:19.531416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:48.415 [2024-11-26 19:08:19.531427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:48.415 [2024-11-26 19:08:19.531440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.415 [2024-11-26 19:08:19.531452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:48.415 [2024-11-26 19:08:19.531464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:23:48.415 [2024-11-26 19:08:19.531474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.415 [2024-11-26 19:08:19.566827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.415 [2024-11-26 19:08:19.566911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:48.415 [2024-11-26 19:08:19.566931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.278 ms 00:23:48.415 [2024-11-26 19:08:19.566956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.415 [2024-11-26 19:08:19.567077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.415 [2024-11-26 19:08:19.567093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:48.415 [2024-11-26 19:08:19.567106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.068 ms 00:23:48.415 [2024-11-26 19:08:19.567117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.674 [2024-11-26 19:08:19.626097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.674 [2024-11-26 19:08:19.626199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:48.674 [2024-11-26 19:08:19.626223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.823 ms 00:23:48.674 [2024-11-26 19:08:19.626236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.674 [2024-11-26 19:08:19.626327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.674 [2024-11-26 19:08:19.626345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:48.674 [2024-11-26 19:08:19.626366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:48.674 [2024-11-26 19:08:19.626378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.674 [2024-11-26 19:08:19.626833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.674 [2024-11-26 19:08:19.626865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:48.674 [2024-11-26 19:08:19.626880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:23:48.674 [2024-11-26 19:08:19.626891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.674 [2024-11-26 19:08:19.627057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.674 [2024-11-26 19:08:19.627087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:48.674 [2024-11-26 19:08:19.627108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:23:48.674 [2024-11-26 19:08:19.627120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.674 [2024-11-26 19:08:19.645464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.674 [2024-11-26 19:08:19.645557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:48.674 [2024-11-26 19:08:19.645577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.312 ms 00:23:48.674 [2024-11-26 19:08:19.645590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.662710] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:48.675 [2024-11-26 19:08:19.662811] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:48.675 [2024-11-26 19:08:19.662834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.662848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:48.675 [2024-11-26 19:08:19.662863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.038 ms 00:23:48.675 [2024-11-26 19:08:19.662875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.696766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.696890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:48.675 [2024-11-26 19:08:19.696913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.780 ms 00:23:48.675 [2024-11-26 19:08:19.696926] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.713817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.713913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:48.675 [2024-11-26 19:08:19.713933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.810 ms 00:23:48.675 [2024-11-26 19:08:19.713947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.730555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.730646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:48.675 [2024-11-26 19:08:19.730668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.506 ms 00:23:48.675 [2024-11-26 19:08:19.730679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.731683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.731724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:48.675 [2024-11-26 19:08:19.731740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:23:48.675 [2024-11-26 19:08:19.731758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.824861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.824987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:48.675 [2024-11-26 19:08:19.825022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.065 ms 00:23:48.675 [2024-11-26 19:08:19.825065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.841789] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:48.675 [2024-11-26 19:08:19.844798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.844862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:48.675 [2024-11-26 19:08:19.844882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.463 ms 00:23:48.675 [2024-11-26 19:08:19.844894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.845051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.845075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:48.675 [2024-11-26 19:08:19.845089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:48.675 [2024-11-26 19:08:19.845101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.845248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.845279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:48.675 [2024-11-26 19:08:19.845294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:48.675 [2024-11-26 19:08:19.845305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.845340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.845355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:23:48.675 [2024-11-26 19:08:19.845367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:48.675 [2024-11-26 19:08:19.845379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.845422] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:48.675 [2024-11-26 19:08:19.845443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.845454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:48.675 [2024-11-26 19:08:19.845466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:48.675 [2024-11-26 19:08:19.845477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.878807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.878892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:48.675 [2024-11-26 19:08:19.878914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.298 ms 00:23:48.675 [2024-11-26 19:08:19.878942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.879108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.675 [2024-11-26 19:08:19.879129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:48.675 [2024-11-26 19:08:19.879143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:48.675 [2024-11-26 19:08:19.879154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.675 [2024-11-26 19:08:19.880561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 382.328 ms, result 0
00:23:50.052  [2024-11-26T19:08:22.202Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-26T19:08:23.137Z] Copying: 59/1024 [MB] (29 MBps) [... 30 similar per-second progress updates (85-988 MB, 26-33 MBps) elided ...] [2024-11-26T19:08:53.304Z] Copying: 1019/1024 [MB] (31 MBps) [2024-11-26T19:08:53.304Z] Copying: 1024/1024 [MB] (average 30 MBps)
[2024-11-26 19:08:53.041972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.089 [2024-11-26 19:08:53.042065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:22.089 [2024-11-26 19:08:53.042100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:22.089 [2024-11-26 19:08:53.042124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.089 [2024-11-26 19:08:53.042197] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:22.089 [2024-11-26 19:08:53.047274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.089 [2024-11-26 19:08:53.047347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:22.089 [2024-11-26 19:08:53.047390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.033 ms 00:24:22.089 [2024-11-26 19:08:53.047412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.089 [2024-11-26 19:08:53.048964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.089 [2024-11-26 19:08:53.049025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:22.089 [2024-11-26 19:08:53.049053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:24:22.089 [2024-11-26 19:08:53.049074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.089 [2024-11-26 19:08:53.067745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.089 [2024-11-26 19:08:53.067867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:22.089 [2024-11-26 19:08:53.067905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.625 ms 00:24:22.089 [2024-11-26 19:08:53.067930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.089 [2024-11-26 19:08:53.076709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.089 [2024-11-26 19:08:53.076824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:22.089 [2024-11-26 19:08:53.076859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.636 ms 00:24:22.089 [2024-11-26 19:08:53.076884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.089 [2024-11-26 19:08:53.124947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.089 [2024-11-26 19:08:53.125075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:22.089 [2024-11-26 19:08:53.125111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.886 ms 00:24:22.089 [2024-11-26 19:08:53.125135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.089 [2024-11-26 19:08:53.151326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.089 [2024-11-26 19:08:53.151444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:22.089 [2024-11-26 19:08:53.151478]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.031 ms 00:24:22.089 [2024-11-26 19:08:53.151499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.089 [2024-11-26 19:08:53.151763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.089 [2024-11-26 19:08:53.151812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:22.089 [2024-11-26 19:08:53.151835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:24:22.089 [2024-11-26 19:08:53.151854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.089 [2024-11-26 19:08:53.200355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.089 [2024-11-26 19:08:53.200471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:22.089 [2024-11-26 19:08:53.200504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.455 ms 00:24:22.089 [2024-11-26 19:08:53.200529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.089 [2024-11-26 19:08:53.248628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.089 [2024-11-26 19:08:53.248733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:22.089 [2024-11-26 19:08:53.248767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.961 ms 00:24:22.089 [2024-11-26 19:08:53.248785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.089 [2024-11-26 19:08:53.281715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.089 [2024-11-26 19:08:53.281816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:22.089 [2024-11-26 19:08:53.281851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.814 ms 00:24:22.089 [2024-11-26 19:08:53.281869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.348 [2024-11-26 19:08:53.314986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.348 [2024-11-26 19:08:53.315076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:22.348 [2024-11-26 19:08:53.315106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.737 ms 00:24:22.348 [2024-11-26 19:08:53.315132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.348 [2024-11-26 19:08:53.315247] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:22.348 [2024-11-26 19:08:53.315288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: 
free 00:24:22.348 [2024-11-26 19:08:53.315453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:22.348 [2024-11-26 19:08:53.315938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.315960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.315981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 
261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.316992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317077] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:22.349 [2024-11-26 19:08:53.317509] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:22.349 [2024-11-26 19:08:53.317540] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 61cf15d5-2808-44b7-8dd9-1f70723e1c5d 00:24:22.349 [2024-11-26 19:08:53.317561] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:22.349 [2024-11-26 19:08:53.317579] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:22.349 [2024-11-26 19:08:53.317599] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:22.349 [2024-11-26 19:08:53.317621] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:22.349 [2024-11-26 19:08:53.317640] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:22.349 [2024-11-26 19:08:53.317682] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:22.349 [2024-11-26 19:08:53.317702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:22.349 [2024-11-26 19:08:53.317721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:22.349 [2024-11-26 19:08:53.317739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:22.349 [2024-11-26 19:08:53.317762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.349 [2024-11-26 19:08:53.317781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:22.349 [2024-11-26 19:08:53.317805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.517 ms 00:24:22.349 [2024-11-26 19:08:53.317825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.350 [2024-11-26 19:08:53.336793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.350 [2024-11-26 19:08:53.336874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:22.350 [2024-11-26 19:08:53.336906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.853 ms 00:24:22.350 [2024-11-26 19:08:53.336923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.350 [2024-11-26 19:08:53.337593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.350 [2024-11-26 19:08:53.337637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:22.350 [2024-11-26 19:08:53.337666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:24:22.350 [2024-11-26 19:08:53.337705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.350 [2024-11-26 19:08:53.381854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.350 [2024-11-26 19:08:53.381938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.350 [2024-11-26 19:08:53.381970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.350 [2024-11-26 19:08:53.381987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.350 [2024-11-26 19:08:53.382104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.350 [2024-11-26 19:08:53.382130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.350 [2024-11-26 19:08:53.382148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.350 [2024-11-26 19:08:53.382196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.350 [2024-11-26 19:08:53.382411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.350 [2024-11-26 19:08:53.382442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.350 [2024-11-26 19:08:53.382465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.350 [2024-11-26 19:08:53.382485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.350 [2024-11-26 19:08:53.382524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.350 [2024-11-26 19:08:53.382546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.350 [2024-11-26 19:08:53.382567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.350 [2024-11-26 19:08:53.382586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
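
Note on the statistics dump above: the device reports total writes: 960 against user writes: 0, which is why the write amplification factor prints as "WAF: inf". No host-initiated I/O had reached this FTL instance by shutdown time, so effectively all 960 block writes were FTL-internal metadata writes and the media-writes/host-writes ratio has a zero denominator. A minimal sketch of that arithmetic (illustration only, not SPDK code):

def waf(total_writes: int, user_writes: int) -> float:
    """Write amplification factor; infinite when there were no host writes."""
    return float("inf") if user_writes == 0 else total_writes / user_writes

print(waf(960, 0))  # -> inf, matching the dump's "WAF: inf"
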
00:24:22.350 [2024-11-26 19:08:53.487541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.350 [2024-11-26 19:08:53.487640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.350 [2024-11-26 19:08:53.487670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.350 [2024-11-26 19:08:53.487689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.608 [2024-11-26 19:08:53.574038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.608 [2024-11-26 19:08:53.574118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:22.608 [2024-11-26 19:08:53.574147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.608 [2024-11-26 19:08:53.574205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.608 [2024-11-26 19:08:53.574358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.608 [2024-11-26 19:08:53.574387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.608 [2024-11-26 19:08:53.574407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.608 [2024-11-26 19:08:53.574426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.608 [2024-11-26 19:08:53.574497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.608 [2024-11-26 19:08:53.574524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.608 [2024-11-26 19:08:53.574544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.608 [2024-11-26 19:08:53.574563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.608 [2024-11-26 19:08:53.574752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.608 [2024-11-26 19:08:53.574784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.608 [2024-11-26 19:08:53.574805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.608 [2024-11-26 19:08:53.574824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.608 [2024-11-26 19:08:53.574903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.608 [2024-11-26 19:08:53.574932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:22.608 [2024-11-26 19:08:53.574954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.608 [2024-11-26 19:08:53.574972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.608 [2024-11-26 19:08:53.575039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.608 [2024-11-26 19:08:53.575073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.608 [2024-11-26 19:08:53.575094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.608 [2024-11-26 19:08:53.575113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.608 [2024-11-26 19:08:53.575207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.608 [2024-11-26 19:08:53.575236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.608 [2024-11-26 19:08:53.575256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.608 [2024-11-26 19:08:53.575276] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.608 [2024-11-26 19:08:53.575491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.469 ms, result 0 00:24:23.542 00:24:23.542 00:24:23.542 19:08:54 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:24:23.542 [2024-11-26 19:08:54.690282] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:24:23.542 [2024-11-26 19:08:54.690503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79874 ] 00:24:23.800 [2024-11-26 19:08:54.870116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.800 [2024-11-26 19:08:54.972290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.368 [2024-11-26 19:08:55.297914] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:24.368 [2024-11-26 19:08:55.298003] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:24.368 [2024-11-26 19:08:55.459646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.368 [2024-11-26 19:08:55.459725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:24.368 [2024-11-26 19:08:55.459745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:24.368 [2024-11-26 19:08:55.459758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.368 [2024-11-26 19:08:55.459840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.368 [2024-11-26 19:08:55.459862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.368 [2024-11-26 19:08:55.459875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:24.368 [2024-11-26 19:08:55.459887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.368 [2024-11-26 19:08:55.459920] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:24.368 [2024-11-26 19:08:55.460946] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:24.368 [2024-11-26 19:08:55.460986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.368 [2024-11-26 19:08:55.461000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.368 [2024-11-26 19:08:55.461013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:24:24.368 [2024-11-26 19:08:55.461024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.368 [2024-11-26 19:08:55.462204] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:24.368 [2024-11-26 19:08:55.479920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.368 [2024-11-26 19:08:55.480002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:24.368 [2024-11-26 19:08:55.480023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.709 ms 00:24:24.368 [2024-11-26 
19:08:55.480037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.368 [2024-11-26 19:08:55.480199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.369 [2024-11-26 19:08:55.480222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:24.369 [2024-11-26 19:08:55.480235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:24.369 [2024-11-26 19:08:55.480245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.369 [2024-11-26 19:08:55.485079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.369 [2024-11-26 19:08:55.485148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.369 [2024-11-26 19:08:55.485167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.690 ms 00:24:24.369 [2024-11-26 19:08:55.485204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.369 [2024-11-26 19:08:55.485331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.369 [2024-11-26 19:08:55.485351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.369 [2024-11-26 19:08:55.485364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:24.369 [2024-11-26 19:08:55.485375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.369 [2024-11-26 19:08:55.485459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.369 [2024-11-26 19:08:55.485477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:24.369 [2024-11-26 19:08:55.485490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:24.369 [2024-11-26 19:08:55.485501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.369 [2024-11-26 19:08:55.485543] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:24.369 [2024-11-26 19:08:55.489900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.369 [2024-11-26 19:08:55.489953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.369 [2024-11-26 19:08:55.489976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.366 ms 00:24:24.369 [2024-11-26 19:08:55.489988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.369 [2024-11-26 19:08:55.490037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.369 [2024-11-26 19:08:55.490052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:24.369 [2024-11-26 19:08:55.490064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:24.369 [2024-11-26 19:08:55.490075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.369 [2024-11-26 19:08:55.490136] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:24.369 [2024-11-26 19:08:55.490167] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:24.369 [2024-11-26 19:08:55.490228] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:24.369 [2024-11-26 19:08:55.490253] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:24.369 [2024-11-26 
19:08:55.490367] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:24.369 [2024-11-26 19:08:55.490389] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:24.369 [2024-11-26 19:08:55.490405] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:24.369 [2024-11-26 19:08:55.490420] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:24.369 [2024-11-26 19:08:55.490434] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:24.369 [2024-11-26 19:08:55.490446] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:24.369 [2024-11-26 19:08:55.490457] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:24.369 [2024-11-26 19:08:55.490472] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:24.369 [2024-11-26 19:08:55.490483] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:24.369 [2024-11-26 19:08:55.490494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.369 [2024-11-26 19:08:55.490505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:24.369 [2024-11-26 19:08:55.490518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:24:24.369 [2024-11-26 19:08:55.490528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.369 [2024-11-26 19:08:55.490631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.369 [2024-11-26 19:08:55.490646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:24.369 [2024-11-26 19:08:55.490658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:24.369 [2024-11-26 19:08:55.490669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.369 [2024-11-26 19:08:55.490823] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:24.369 [2024-11-26 19:08:55.490844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:24.369 [2024-11-26 19:08:55.490857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:24.369 [2024-11-26 19:08:55.490868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.369 [2024-11-26 19:08:55.490880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:24.369 [2024-11-26 19:08:55.490890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:24.369 [2024-11-26 19:08:55.490901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:24.369 [2024-11-26 19:08:55.490911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:24.369 [2024-11-26 19:08:55.490929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:24.369 [2024-11-26 19:08:55.490948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:24.369 [2024-11-26 19:08:55.490967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:24.369 [2024-11-26 19:08:55.490979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:24.369 [2024-11-26 19:08:55.490989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
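
Note on the layout dump being printed here: the human-readable dump_region lines and the hex "Region type/ver/blk_offs/blk_sz" superblock entries (see the SB metadata layout dumps in this log) describe the same regions in two units. The sizes imply a 4 KiB FTL block (the l2p region's 0x5000 blocks == 80.00 MiB), and in this dump each region's offset equals the previous region's offset plus its size, i.e. the NV cache layout is packed back to back. A minimal cross-check sketch (illustration only, not SPDK code; region names inferred by matching sizes against dump_region):

BLOCK = 4096  # bytes per FTL block, inferred from 0x5000 blocks == 80.00 MiB

regions = [            # (name, blk_offs, blk_sz) from the superblock dump
    ("sb",             0x0,    0x20),
    ("l2p",            0x20,   0x5000),
    ("band_md",        0x5020, 0x80),
    ("band_md_mirror", 0x50a0, 0x80),
]

mib = lambda blocks: blocks * BLOCK / (1 << 20)
for name, offs, size in regions:
    print(f"{name:14s} offset {mib(offs):9.2f} MiB  size {mib(size):6.2f} MiB")
# -> offsets/sizes 0.00/0.12, 0.12/80.00, 80.12/0.50, 80.62/0.50 MiB,
#    matching dump_region; each offset is the previous offset + size.
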
00:24:24.369 [2024-11-26 19:08:55.491014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:24.369 [2024-11-26 19:08:55.491026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:24.369 [2024-11-26 19:08:55.491036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.369 [2024-11-26 19:08:55.491047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:24.369 [2024-11-26 19:08:55.491057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:24.369 [2024-11-26 19:08:55.491067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.369 [2024-11-26 19:08:55.491078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:24.369 [2024-11-26 19:08:55.491089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:24.369 [2024-11-26 19:08:55.491099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.369 [2024-11-26 19:08:55.491109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:24.369 [2024-11-26 19:08:55.491119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:24.369 [2024-11-26 19:08:55.491129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.369 [2024-11-26 19:08:55.491139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:24.369 [2024-11-26 19:08:55.491149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:24.369 [2024-11-26 19:08:55.491159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.369 [2024-11-26 19:08:55.491183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:24.369 [2024-11-26 19:08:55.491197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:24.369 [2024-11-26 19:08:55.491207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.369 [2024-11-26 19:08:55.491219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:24.369 [2024-11-26 19:08:55.491229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:24.369 [2024-11-26 19:08:55.491239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:24.369 [2024-11-26 19:08:55.491249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:24.369 [2024-11-26 19:08:55.491259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:24.369 [2024-11-26 19:08:55.491270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:24.369 [2024-11-26 19:08:55.491280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:24.369 [2024-11-26 19:08:55.491290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:24.369 [2024-11-26 19:08:55.491300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.369 [2024-11-26 19:08:55.491310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:24.369 [2024-11-26 19:08:55.491321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:24.369 [2024-11-26 19:08:55.491331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.369 [2024-11-26 19:08:55.491340] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:24.369 [2024-11-26 19:08:55.491352] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:24.369 [2024-11-26 19:08:55.491362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:24.369 [2024-11-26 19:08:55.491375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.369 [2024-11-26 19:08:55.491387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:24.369 [2024-11-26 19:08:55.491397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:24.369 [2024-11-26 19:08:55.491408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:24.369 [2024-11-26 19:08:55.491418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:24.369 [2024-11-26 19:08:55.491428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:24.369 [2024-11-26 19:08:55.491439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:24.369 [2024-11-26 19:08:55.491451] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:24.369 [2024-11-26 19:08:55.491465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:24.369 [2024-11-26 19:08:55.491482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:24.369 [2024-11-26 19:08:55.491494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:24.369 [2024-11-26 19:08:55.491505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:24.370 [2024-11-26 19:08:55.491516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:24.370 [2024-11-26 19:08:55.491528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:24.370 [2024-11-26 19:08:55.491539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:24.370 [2024-11-26 19:08:55.491550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:24.370 [2024-11-26 19:08:55.491561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:24.370 [2024-11-26 19:08:55.491572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:24.370 [2024-11-26 19:08:55.491583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:24.370 [2024-11-26 19:08:55.491594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:24.370 [2024-11-26 19:08:55.491605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:24.370 [2024-11-26 19:08:55.491630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:24.370 [2024-11-26 
19:08:55.491642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:24.370 [2024-11-26 19:08:55.491653] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:24.370 [2024-11-26 19:08:55.491666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:24.370 [2024-11-26 19:08:55.491677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:24.370 [2024-11-26 19:08:55.491689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:24.370 [2024-11-26 19:08:55.491700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:24.370 [2024-11-26 19:08:55.491712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:24.370 [2024-11-26 19:08:55.491724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.370 [2024-11-26 19:08:55.491735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:24.370 [2024-11-26 19:08:55.491747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:24:24.370 [2024-11-26 19:08:55.491758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.370 [2024-11-26 19:08:55.524684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.370 [2024-11-26 19:08:55.524756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.370 [2024-11-26 19:08:55.524776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.859 ms 00:24:24.370 [2024-11-26 19:08:55.524794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.370 [2024-11-26 19:08:55.524912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.370 [2024-11-26 19:08:55.524928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:24.370 [2024-11-26 19:08:55.524942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:24.370 [2024-11-26 19:08:55.524953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.370 [2024-11-26 19:08:55.573307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.370 [2024-11-26 19:08:55.573385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.370 [2024-11-26 19:08:55.573406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.249 ms 00:24:24.370 [2024-11-26 19:08:55.573418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.370 [2024-11-26 19:08:55.573507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.370 [2024-11-26 19:08:55.573524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.370 [2024-11-26 19:08:55.573544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:24.370 [2024-11-26 19:08:55.573555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.370 [2024-11-26 19:08:55.573996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:24:24.370 [2024-11-26 19:08:55.574022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.370 [2024-11-26 19:08:55.574036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:24:24.370 [2024-11-26 19:08:55.574047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.370 [2024-11-26 19:08:55.574224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.370 [2024-11-26 19:08:55.574245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.370 [2024-11-26 19:08:55.574264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:24:24.370 [2024-11-26 19:08:55.574275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.629 [2024-11-26 19:08:55.591026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.629 [2024-11-26 19:08:55.591098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.629 [2024-11-26 19:08:55.591118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.719 ms 00:24:24.629 [2024-11-26 19:08:55.591130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.629 [2024-11-26 19:08:55.608750] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:24.629 [2024-11-26 19:08:55.608835] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:24.629 [2024-11-26 19:08:55.608859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.629 [2024-11-26 19:08:55.608873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:24.629 [2024-11-26 19:08:55.608889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.515 ms 00:24:24.629 [2024-11-26 19:08:55.608900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.629 [2024-11-26 19:08:55.639935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.629 [2024-11-26 19:08:55.640053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:24.629 [2024-11-26 19:08:55.640076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.933 ms 00:24:24.629 [2024-11-26 19:08:55.640088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.629 [2024-11-26 19:08:55.656782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.629 [2024-11-26 19:08:55.656868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:24.629 [2024-11-26 19:08:55.656889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.549 ms 00:24:24.629 [2024-11-26 19:08:55.656901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.630 [2024-11-26 19:08:55.673239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.630 [2024-11-26 19:08:55.673321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:24.630 [2024-11-26 19:08:55.673340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.250 ms 00:24:24.630 [2024-11-26 19:08:55.673352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.630 [2024-11-26 19:08:55.674286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.630 [2024-11-26 19:08:55.674318] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:24.630 [2024-11-26 19:08:55.674338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:24:24.630 [2024-11-26 19:08:55.674350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.630 [2024-11-26 19:08:55.768116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.630 [2024-11-26 19:08:55.768252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:24.630 [2024-11-26 19:08:55.768308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.719 ms 00:24:24.630 [2024-11-26 19:08:55.768331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.630 [2024-11-26 19:08:55.785443] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:24.630 [2024-11-26 19:08:55.788877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.630 [2024-11-26 19:08:55.788963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:24.630 [2024-11-26 19:08:55.788994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.283 ms 00:24:24.630 [2024-11-26 19:08:55.789013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.630 [2024-11-26 19:08:55.789231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.630 [2024-11-26 19:08:55.789269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:24.630 [2024-11-26 19:08:55.789299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:24.630 [2024-11-26 19:08:55.789317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.630 [2024-11-26 19:08:55.789498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.630 [2024-11-26 19:08:55.789527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:24.630 [2024-11-26 19:08:55.789546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:24.630 [2024-11-26 19:08:55.789564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.630 [2024-11-26 19:08:55.789613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.630 [2024-11-26 19:08:55.789637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:24.630 [2024-11-26 19:08:55.789656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:24.630 [2024-11-26 19:08:55.789672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.630 [2024-11-26 19:08:55.789740] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:24.630 [2024-11-26 19:08:55.789765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.630 [2024-11-26 19:08:55.789782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:24.630 [2024-11-26 19:08:55.789800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:24.630 [2024-11-26 19:08:55.789817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.630 [2024-11-26 19:08:55.837608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.630 [2024-11-26 19:08:55.837746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:24.630 [2024-11-26 19:08:55.837801] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.738 ms 00:24:24.630 [2024-11-26 19:08:55.837822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.630 [2024-11-26 19:08:55.837998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.630 [2024-11-26 19:08:55.838029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:24.630 [2024-11-26 19:08:55.838050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:24.630 [2024-11-26 19:08:55.838069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.630 [2024-11-26 19:08:55.839846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 379.396 ms, result 0 00:24:26.005  [2024-11-26T19:08:58.155Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-26T19:08:59.529Z] Copying: 55/1024 [MB] (27 MBps) [2024-11-26T19:09:00.463Z] Copying: 78/1024 [MB] (22 MBps) [2024-11-26T19:09:01.399Z] Copying: 101/1024 [MB] (22 MBps) [2024-11-26T19:09:02.333Z] Copying: 125/1024 [MB] (24 MBps) [2024-11-26T19:09:03.266Z] Copying: 151/1024 [MB] (25 MBps) [2024-11-26T19:09:04.200Z] Copying: 176/1024 [MB] (25 MBps) [2024-11-26T19:09:05.134Z] Copying: 198/1024 [MB] (21 MBps) [2024-11-26T19:09:06.509Z] Copying: 221/1024 [MB] (23 MBps) [2024-11-26T19:09:07.444Z] Copying: 246/1024 [MB] (24 MBps) [2024-11-26T19:09:08.378Z] Copying: 271/1024 [MB] (24 MBps) [2024-11-26T19:09:09.316Z] Copying: 298/1024 [MB] (27 MBps) [2024-11-26T19:09:10.250Z] Copying: 325/1024 [MB] (26 MBps) [2024-11-26T19:09:11.182Z] Copying: 351/1024 [MB] (26 MBps) [2024-11-26T19:09:12.117Z] Copying: 376/1024 [MB] (24 MBps) [2024-11-26T19:09:13.494Z] Copying: 404/1024 [MB] (27 MBps) [2024-11-26T19:09:14.429Z] Copying: 431/1024 [MB] (26 MBps) [2024-11-26T19:09:15.363Z] Copying: 456/1024 [MB] (25 MBps) [2024-11-26T19:09:16.298Z] Copying: 485/1024 [MB] (28 MBps) [2024-11-26T19:09:17.232Z] Copying: 511/1024 [MB] (26 MBps) [2024-11-26T19:09:18.168Z] Copying: 539/1024 [MB] (28 MBps) [2024-11-26T19:09:19.103Z] Copying: 565/1024 [MB] (25 MBps) [2024-11-26T19:09:20.488Z] Copying: 590/1024 [MB] (25 MBps) [2024-11-26T19:09:21.421Z] Copying: 616/1024 [MB] (25 MBps) [2024-11-26T19:09:22.358Z] Copying: 642/1024 [MB] (26 MBps) [2024-11-26T19:09:23.294Z] Copying: 667/1024 [MB] (24 MBps) [2024-11-26T19:09:24.228Z] Copying: 692/1024 [MB] (24 MBps) [2024-11-26T19:09:25.164Z] Copying: 720/1024 [MB] (28 MBps) [2024-11-26T19:09:26.536Z] Copying: 748/1024 [MB] (28 MBps) [2024-11-26T19:09:27.103Z] Copying: 776/1024 [MB] (27 MBps) [2024-11-26T19:09:28.474Z] Copying: 799/1024 [MB] (22 MBps) [2024-11-26T19:09:29.406Z] Copying: 827/1024 [MB] (28 MBps) [2024-11-26T19:09:30.337Z] Copying: 854/1024 [MB] (26 MBps) [2024-11-26T19:09:31.272Z] Copying: 882/1024 [MB] (27 MBps) [2024-11-26T19:09:32.206Z] Copying: 910/1024 [MB] (28 MBps) [2024-11-26T19:09:33.139Z] Copying: 937/1024 [MB] (27 MBps) [2024-11-26T19:09:34.511Z] Copying: 965/1024 [MB] (27 MBps) [2024-11-26T19:09:35.119Z] Copying: 995/1024 [MB] (29 MBps) [2024-11-26T19:09:35.378Z] Copying: 1023/1024 [MB] (27 MBps) [2024-11-26T19:09:35.638Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-26 19:09:35.466991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.423 [2024-11-26 19:09:35.467088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:04.423 [2024-11-26 19:09:35.467116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 
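Every management step in this trace is four trace_step NOTICE records (Action or Rollback, name, duration, status), and finish_msg then reports the total, 379.396 ms for the 'FTL startup' process just above; the 'FTL shutdown' trace resuming below has the same shape. A hedged sketch of folding those quadruplets into one row per step, again assuming one NOTICE record per line arriving on stdin:

import re, sys

# Fold trace_step's name/duration/status records into one row per management
# step; non-trace lines are ignored. Assumes one NOTICE record per line.
NAME = re.compile(r"name: (.+?)\s*$")
DURATION = re.compile(r"duration: ([\d.]+) ms")
STATUS = re.compile(r"status: (-?\d+)")

def steps(lines):
    name = duration = None
    for line in lines:
        if "trace_step" not in line:
            continue
        if m := NAME.search(line):
            name = m.group(1)
        elif m := DURATION.search(line):
            duration = float(m.group(1))
        elif (m := STATUS.search(line)) and name is not None:
            yield name, duration, int(m.group(1))
            name = duration = None

for name, duration, status in steps(sys.stdin):
    marker = "" if status == 0 else "  <-- nonzero status"
    print(f"{duration:9.3f} ms  {name}{marker}")

Piped through the startup trace above, this would show 'Restore P2L checkpoints' (93.719 ms) and 'Initialize NV cache' (48.249 ms) dominating the 379 ms total.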
00:25:04.423 [2024-11-26 19:09:35.467133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.423 [2024-11-26 19:09:35.467191] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:04.423 [2024-11-26 19:09:35.473700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.423 [2024-11-26 19:09:35.473786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:04.423 [2024-11-26 19:09:35.473805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.469 ms 00:25:04.423 [2024-11-26 19:09:35.473816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.423 [2024-11-26 19:09:35.474094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.423 [2024-11-26 19:09:35.474119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:04.423 [2024-11-26 19:09:35.474133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:25:04.423 [2024-11-26 19:09:35.474144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.423 [2024-11-26 19:09:35.478202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.423 [2024-11-26 19:09:35.478248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:04.423 [2024-11-26 19:09:35.478265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.035 ms 00:25:04.423 [2024-11-26 19:09:35.478284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.423 [2024-11-26 19:09:35.485649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.423 [2024-11-26 19:09:35.485713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:04.423 [2024-11-26 19:09:35.485731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.324 ms 00:25:04.423 [2024-11-26 19:09:35.485743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.423 [2024-11-26 19:09:35.518447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.423 [2024-11-26 19:09:35.518532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:04.423 [2024-11-26 19:09:35.518554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.583 ms 00:25:04.423 [2024-11-26 19:09:35.518567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.423 [2024-11-26 19:09:35.537600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.423 [2024-11-26 19:09:35.537691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:04.423 [2024-11-26 19:09:35.537713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.969 ms 00:25:04.423 [2024-11-26 19:09:35.537726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.423 [2024-11-26 19:09:35.537972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.423 [2024-11-26 19:09:35.537994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:04.423 [2024-11-26 19:09:35.538008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:25:04.423 [2024-11-26 19:09:35.538019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.423 [2024-11-26 19:09:35.573034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.423 [2024-11-26 
19:09:35.573116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:04.423 [2024-11-26 19:09:35.573137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.988 ms 00:25:04.423 [2024-11-26 19:09:35.573151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.423 [2024-11-26 19:09:35.605811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.423 [2024-11-26 19:09:35.605897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:04.423 [2024-11-26 19:09:35.605918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.589 ms 00:25:04.423 [2024-11-26 19:09:35.605930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.682 [2024-11-26 19:09:35.638477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.682 [2024-11-26 19:09:35.638580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:04.682 [2024-11-26 19:09:35.638601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.485 ms 00:25:04.683 [2024-11-26 19:09:35.638613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.683 [2024-11-26 19:09:35.671434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.683 [2024-11-26 19:09:35.671515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:04.683 [2024-11-26 19:09:35.671541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.695 ms 00:25:04.683 [2024-11-26 19:09:35.671553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.683 [2024-11-26 19:09:35.671608] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:04.683 [2024-11-26 19:09:35.671657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671808] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.671998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 
[2024-11-26 19:09:35.672109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:25:04.683 [2024-11-26 19:09:35.672420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:04.683 [2024-11-26 19:09:35.672654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:04.684 [2024-11-26 19:09:35.672881] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:04.684 [2024-11-26 19:09:35.672893] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 61cf15d5-2808-44b7-8dd9-1f70723e1c5d 00:25:04.684 [2024-11-26 19:09:35.672905] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:04.684 [2024-11-26 19:09:35.672916] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:04.684 [2024-11-26 19:09:35.672927] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:04.684 [2024-11-26 19:09:35.672938] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:04.684 [2024-11-26 19:09:35.672966] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:04.684 [2024-11-26 19:09:35.672977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:04.684 [2024-11-26 19:09:35.672988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:04.684 [2024-11-26 19:09:35.672998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:04.684 [2024-11-26 19:09:35.673008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:04.684 [2024-11-26 19:09:35.673019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.684 [2024-11-26 19:09:35.673030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:04.684 [2024-11-26 19:09:35.673041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:25:04.684 [2024-11-26 19:09:35.673057] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.684 [2024-11-26 19:09:35.690384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.684 [2024-11-26 19:09:35.690459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:04.684 [2024-11-26 19:09:35.690482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.256 ms 00:25:04.684 [2024-11-26 19:09:35.690494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.684 [2024-11-26 19:09:35.690960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.684 [2024-11-26 19:09:35.690983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:04.684 [2024-11-26 19:09:35.691006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:25:04.684 [2024-11-26 19:09:35.691017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.684 [2024-11-26 19:09:35.734796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.684 [2024-11-26 19:09:35.734872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:04.684 [2024-11-26 19:09:35.734891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.684 [2024-11-26 19:09:35.734903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.684 [2024-11-26 19:09:35.734990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.684 [2024-11-26 19:09:35.735007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:04.684 [2024-11-26 19:09:35.735027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.684 [2024-11-26 19:09:35.735038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.684 [2024-11-26 19:09:35.735146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.684 [2024-11-26 19:09:35.735182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:04.684 [2024-11-26 19:09:35.735200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.684 [2024-11-26 19:09:35.735211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.684 [2024-11-26 19:09:35.735235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.684 [2024-11-26 19:09:35.735249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:04.684 [2024-11-26 19:09:35.735261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.684 [2024-11-26 19:09:35.735279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.684 [2024-11-26 19:09:35.840098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.684 [2024-11-26 19:09:35.840182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:04.684 [2024-11-26 19:09:35.840203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.684 [2024-11-26 19:09:35.840215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.943 [2024-11-26 19:09:35.927253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.943 [2024-11-26 19:09:35.927326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:04.943 [2024-11-26 19:09:35.927360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:25:04.943 [2024-11-26 19:09:35.927372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.943 [2024-11-26 19:09:35.927484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.943 [2024-11-26 19:09:35.927502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:04.943 [2024-11-26 19:09:35.927516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.943 [2024-11-26 19:09:35.927527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.943 [2024-11-26 19:09:35.927573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.943 [2024-11-26 19:09:35.927589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:04.943 [2024-11-26 19:09:35.927601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.943 [2024-11-26 19:09:35.927612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.943 [2024-11-26 19:09:35.927758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.943 [2024-11-26 19:09:35.927780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:04.943 [2024-11-26 19:09:35.927793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.943 [2024-11-26 19:09:35.927804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.943 [2024-11-26 19:09:35.927865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.943 [2024-11-26 19:09:35.927885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:04.943 [2024-11-26 19:09:35.927898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.943 [2024-11-26 19:09:35.927909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.943 [2024-11-26 19:09:35.927961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.943 [2024-11-26 19:09:35.927977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:04.943 [2024-11-26 19:09:35.927989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.943 [2024-11-26 19:09:35.928000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.943 [2024-11-26 19:09:35.928052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.943 [2024-11-26 19:09:35.928070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:04.943 [2024-11-26 19:09:35.928082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.943 [2024-11-26 19:09:35.928092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.943 [2024-11-26 19:09:35.928264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 461.247 ms, result 0 00:25:05.878 00:25:05.878 00:25:05.878 19:09:36 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:08.410 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:08.410 19:09:39 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:08.410 [2024-11-26 19:09:39.240073] Starting SPDK v25.01-pre git 
sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:25:08.410 [2024-11-26 19:09:39.240270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80312 ] 00:25:08.410 [2024-11-26 19:09:39.421796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.410 [2024-11-26 19:09:39.526325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.669 [2024-11-26 19:09:39.857451] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:08.669 [2024-11-26 19:09:39.857559] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:08.929 [2024-11-26 19:09:40.020278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.020354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:08.929 [2024-11-26 19:09:40.020376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:08.929 [2024-11-26 19:09:40.020388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.020472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.020494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.929 [2024-11-26 19:09:40.020507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:08.929 [2024-11-26 19:09:40.020518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.020549] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:08.929 [2024-11-26 19:09:40.021587] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:08.929 [2024-11-26 19:09:40.021625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.021639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.929 [2024-11-26 19:09:40.021651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:25:08.929 [2024-11-26 19:09:40.021663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.022933] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:08.929 [2024-11-26 19:09:40.040636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.040728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:08.929 [2024-11-26 19:09:40.040750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.697 ms 00:25:08.929 [2024-11-26 19:09:40.040762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.040917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.040939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:08.929 [2024-11-26 19:09:40.040952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:08.929 [2024-11-26 19:09:40.040963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.046281] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.046362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.929 [2024-11-26 19:09:40.046384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.165 ms 00:25:08.929 [2024-11-26 19:09:40.046407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.046534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.046555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.929 [2024-11-26 19:09:40.046568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:08.929 [2024-11-26 19:09:40.046579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.046663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.046682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:08.929 [2024-11-26 19:09:40.046695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:08.929 [2024-11-26 19:09:40.046706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.046748] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:08.929 [2024-11-26 19:09:40.051233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.051301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.929 [2024-11-26 19:09:40.051326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.491 ms 00:25:08.929 [2024-11-26 19:09:40.051339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.051398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.051415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:08.929 [2024-11-26 19:09:40.051427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:08.929 [2024-11-26 19:09:40.051438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.051541] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:08.929 [2024-11-26 19:09:40.051575] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:08.929 [2024-11-26 19:09:40.051620] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:08.929 [2024-11-26 19:09:40.051662] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:08.929 [2024-11-26 19:09:40.051790] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:08.929 [2024-11-26 19:09:40.051812] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:08.929 [2024-11-26 19:09:40.051828] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:08.929 [2024-11-26 19:09:40.051844] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:08.929 
[2024-11-26 19:09:40.051858] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:08.929 [2024-11-26 19:09:40.051869] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:08.929 [2024-11-26 19:09:40.051880] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:08.929 [2024-11-26 19:09:40.051896] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:08.929 [2024-11-26 19:09:40.051907] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:08.929 [2024-11-26 19:09:40.051919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.051930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:08.929 [2024-11-26 19:09:40.051942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:25:08.929 [2024-11-26 19:09:40.051953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.052057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.929 [2024-11-26 19:09:40.052073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:08.929 [2024-11-26 19:09:40.052086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:08.929 [2024-11-26 19:09:40.052097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.929 [2024-11-26 19:09:40.052254] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:08.929 [2024-11-26 19:09:40.052278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:08.929 [2024-11-26 19:09:40.052291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.929 [2024-11-26 19:09:40.052303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.929 [2024-11-26 19:09:40.052314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:08.929 [2024-11-26 19:09:40.052324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:08.929 [2024-11-26 19:09:40.052335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:08.929 [2024-11-26 19:09:40.052345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:08.929 [2024-11-26 19:09:40.052355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:08.929 [2024-11-26 19:09:40.052368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.929 [2024-11-26 19:09:40.052378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:08.929 [2024-11-26 19:09:40.052388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:08.929 [2024-11-26 19:09:40.052398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.929 [2024-11-26 19:09:40.052423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:08.929 [2024-11-26 19:09:40.052435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:08.930 [2024-11-26 19:09:40.052445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.930 [2024-11-26 19:09:40.052455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:08.930 [2024-11-26 19:09:40.052465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:08.930 [2024-11-26 
19:09:40.052475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.930 [2024-11-26 19:09:40.052488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:08.930 [2024-11-26 19:09:40.052498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:08.930 [2024-11-26 19:09:40.052508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.930 [2024-11-26 19:09:40.052517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:08.930 [2024-11-26 19:09:40.052528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:08.930 [2024-11-26 19:09:40.052537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.930 [2024-11-26 19:09:40.052547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:08.930 [2024-11-26 19:09:40.052557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:08.930 [2024-11-26 19:09:40.052567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.930 [2024-11-26 19:09:40.052577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:08.930 [2024-11-26 19:09:40.052587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:08.930 [2024-11-26 19:09:40.052597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.930 [2024-11-26 19:09:40.052607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:08.930 [2024-11-26 19:09:40.052617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:08.930 [2024-11-26 19:09:40.052626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.930 [2024-11-26 19:09:40.052637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:08.930 [2024-11-26 19:09:40.052647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:08.930 [2024-11-26 19:09:40.052656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.930 [2024-11-26 19:09:40.052667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:08.930 [2024-11-26 19:09:40.052677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:08.930 [2024-11-26 19:09:40.052687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.930 [2024-11-26 19:09:40.052697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:08.930 [2024-11-26 19:09:40.052708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:08.930 [2024-11-26 19:09:40.052718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.930 [2024-11-26 19:09:40.052728] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:08.930 [2024-11-26 19:09:40.052740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:08.930 [2024-11-26 19:09:40.052751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.930 [2024-11-26 19:09:40.052762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.930 [2024-11-26 19:09:40.052773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:08.930 [2024-11-26 19:09:40.052783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:08.930 [2024-11-26 19:09:40.052793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:25:08.930 [2024-11-26 19:09:40.052803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:08.930 [2024-11-26 19:09:40.052813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:08.930 [2024-11-26 19:09:40.052823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:08.930 [2024-11-26 19:09:40.052845] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:08.930 [2024-11-26 19:09:40.052860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.930 [2024-11-26 19:09:40.052877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:08.930 [2024-11-26 19:09:40.052889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:08.930 [2024-11-26 19:09:40.052900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:08.930 [2024-11-26 19:09:40.052911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:08.930 [2024-11-26 19:09:40.052922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:08.930 [2024-11-26 19:09:40.052933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:08.930 [2024-11-26 19:09:40.052944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:08.930 [2024-11-26 19:09:40.052955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:08.930 [2024-11-26 19:09:40.052966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:08.930 [2024-11-26 19:09:40.052977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:08.930 [2024-11-26 19:09:40.052988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:08.930 [2024-11-26 19:09:40.052999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:08.930 [2024-11-26 19:09:40.053010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:08.930 [2024-11-26 19:09:40.053021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:08.930 [2024-11-26 19:09:40.053033] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:08.930 [2024-11-26 19:09:40.053046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.930 [2024-11-26 19:09:40.053058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:08.930 [2024-11-26 19:09:40.053069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:08.930 [2024-11-26 19:09:40.053081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:08.930 [2024-11-26 19:09:40.053093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:08.930 [2024-11-26 19:09:40.053105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.930 [2024-11-26 19:09:40.053117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:08.930 [2024-11-26 19:09:40.053128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:25:08.930 [2024-11-26 19:09:40.053139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.930 [2024-11-26 19:09:40.086927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.930 [2024-11-26 19:09:40.086999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.930 [2024-11-26 19:09:40.087020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.705 ms 00:25:08.930 [2024-11-26 19:09:40.087047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.930 [2024-11-26 19:09:40.087205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.930 [2024-11-26 19:09:40.087226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:08.930 [2024-11-26 19:09:40.087239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:08.930 [2024-11-26 19:09:40.087250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.930 [2024-11-26 19:09:40.135065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.930 [2024-11-26 19:09:40.135139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:08.930 [2024-11-26 19:09:40.135160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.701 ms 00:25:08.930 [2024-11-26 19:09:40.135185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.930 [2024-11-26 19:09:40.135273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.930 [2024-11-26 19:09:40.135292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:08.930 [2024-11-26 19:09:40.135312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:08.930 [2024-11-26 19:09:40.135323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.930 [2024-11-26 19:09:40.135825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.930 [2024-11-26 19:09:40.135872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:08.930 [2024-11-26 19:09:40.135889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:25:08.930 [2024-11-26 19:09:40.135901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.930 [2024-11-26 19:09:40.136073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.930 [2024-11-26 19:09:40.136099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
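(Editor's note) The layout dump above hangs together arithmetically: 20,971,520 L2P entries at 4 bytes each come to exactly 80 MiB, which is what the dump reports for the l2p region, and the NV cache regions are packed back to back — each offset is the previous offset plus its size (e.g. l2p at 0.12 MiB + 80.00 MiB = 80.12 MiB, the band_md offset). A minimal C sketch of that arithmetic, using only values read off the log; the variable names and the 4 KiB FTL block-size figure are my assumptions, not SPDK API:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t l2p_entries = 20971520; /* "L2P entries" from the log      */
    const uint64_t addr_size   = 4;        /* "L2P address size" in bytes     */

    uint64_t l2p_bytes = l2p_entries * addr_size;
    /* 20971520 * 4 = 83886080 bytes = exactly 80 MiB, matching the
     * "Region l2p ... blocks: 80.00 MiB" line in the NV cache layout. */
    printf("L2P table: %llu bytes = %.2f MiB\n",
           (unsigned long long)l2p_bytes, l2p_bytes / (1024.0 * 1024.0));

    const uint64_t ftl_block = 4096; /* assumed 4 KiB logical block */
    printf("mapped capacity: %.2f GiB\n",
           l2p_entries * (double)ftl_block / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```

The later "l2p maximum resident size is: 9 (of 10) MiB" line then presumably refers to a DRAM-resident cache of this 80 MiB table, not the table itself.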
00:25:08.930 [2024-11-26 19:09:40.136121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:25:08.930 [2024-11-26 19:09:40.136132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.189 [2024-11-26 19:09:40.153850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.189 [2024-11-26 19:09:40.153933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:09.189 [2024-11-26 19:09:40.153954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.680 ms 00:25:09.190 [2024-11-26 19:09:40.153967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.171162] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:09.190 [2024-11-26 19:09:40.171256] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:09.190 [2024-11-26 19:09:40.171280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.171293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:09.190 [2024-11-26 19:09:40.171309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.115 ms 00:25:09.190 [2024-11-26 19:09:40.171320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.202337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.202428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:09.190 [2024-11-26 19:09:40.202448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.917 ms 00:25:09.190 [2024-11-26 19:09:40.202461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.219216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.219296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:09.190 [2024-11-26 19:09:40.219316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.599 ms 00:25:09.190 [2024-11-26 19:09:40.219328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.235981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.236066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:09.190 [2024-11-26 19:09:40.236087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.555 ms 00:25:09.190 [2024-11-26 19:09:40.236101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.237055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.237091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:09.190 [2024-11-26 19:09:40.237111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:25:09.190 [2024-11-26 19:09:40.237123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.313670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.313771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:09.190 [2024-11-26 19:09:40.313810] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.514 ms 00:25:09.190 [2024-11-26 19:09:40.313823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.327051] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:09.190 [2024-11-26 19:09:40.329941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.329993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:09.190 [2024-11-26 19:09:40.330013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.017 ms 00:25:09.190 [2024-11-26 19:09:40.330025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.330167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.330205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:09.190 [2024-11-26 19:09:40.330223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:09.190 [2024-11-26 19:09:40.330234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.330357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.330378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:09.190 [2024-11-26 19:09:40.330391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:09.190 [2024-11-26 19:09:40.330401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.330437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.330452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:09.190 [2024-11-26 19:09:40.330464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:09.190 [2024-11-26 19:09:40.330475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.330524] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:09.190 [2024-11-26 19:09:40.330541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.330552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:09.190 [2024-11-26 19:09:40.330564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:09.190 [2024-11-26 19:09:40.330575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.363619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.363722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:09.190 [2024-11-26 19:09:40.363760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.014 ms 00:25:09.190 [2024-11-26 19:09:40.363772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.363919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.190 [2024-11-26 19:09:40.363940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:09.190 [2024-11-26 19:09:40.363953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:09.190 [2024-11-26 19:09:40.363964] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:09.190 [2024-11-26 19:09:40.365354] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 344.514 ms, result 0 00:25:10.601  [2024-11-26T19:09:42.384Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-26T19:09:43.762Z] Copying: 57/1024 [MB] (30 MBps) [2024-11-26T19:09:44.699Z] Copying: 88/1024 [MB] (30 MBps) [2024-11-26T19:09:45.635Z] Copying: 117/1024 [MB] (28 MBps) [2024-11-26T19:09:46.571Z] Copying: 147/1024 [MB] (30 MBps) [2024-11-26T19:09:47.565Z] Copying: 176/1024 [MB] (28 MBps) [2024-11-26T19:09:48.499Z] Copying: 204/1024 [MB] (28 MBps) [2024-11-26T19:09:49.439Z] Copying: 233/1024 [MB] (29 MBps) [2024-11-26T19:09:50.818Z] Copying: 260/1024 [MB] (27 MBps) [2024-11-26T19:09:51.385Z] Copying: 291/1024 [MB] (30 MBps) [2024-11-26T19:09:52.857Z] Copying: 320/1024 [MB] (28 MBps) [2024-11-26T19:09:53.424Z] Copying: 349/1024 [MB] (28 MBps) [2024-11-26T19:09:54.799Z] Copying: 380/1024 [MB] (30 MBps) [2024-11-26T19:09:55.732Z] Copying: 411/1024 [MB] (30 MBps) [2024-11-26T19:09:56.665Z] Copying: 440/1024 [MB] (28 MBps) [2024-11-26T19:09:57.600Z] Copying: 470/1024 [MB] (30 MBps) [2024-11-26T19:09:58.536Z] Copying: 500/1024 [MB] (29 MBps) [2024-11-26T19:09:59.538Z] Copying: 529/1024 [MB] (29 MBps) [2024-11-26T19:10:00.472Z] Copying: 559/1024 [MB] (29 MBps) [2024-11-26T19:10:01.406Z] Copying: 589/1024 [MB] (30 MBps) [2024-11-26T19:10:02.779Z] Copying: 618/1024 [MB] (28 MBps) [2024-11-26T19:10:03.714Z] Copying: 650/1024 [MB] (31 MBps) [2024-11-26T19:10:04.649Z] Copying: 680/1024 [MB] (30 MBps) [2024-11-26T19:10:05.583Z] Copying: 711/1024 [MB] (30 MBps) [2024-11-26T19:10:06.581Z] Copying: 742/1024 [MB] (30 MBps) [2024-11-26T19:10:07.516Z] Copying: 771/1024 [MB] (29 MBps) [2024-11-26T19:10:08.450Z] Copying: 802/1024 [MB] (30 MBps) [2024-11-26T19:10:09.388Z] Copying: 831/1024 [MB] (29 MBps) [2024-11-26T19:10:10.764Z] Copying: 860/1024 [MB] (28 MBps) [2024-11-26T19:10:11.699Z] Copying: 890/1024 [MB] (30 MBps) [2024-11-26T19:10:12.697Z] Copying: 920/1024 [MB] (29 MBps) [2024-11-26T19:10:13.631Z] Copying: 948/1024 [MB] (27 MBps) [2024-11-26T19:10:14.567Z] Copying: 979/1024 [MB] (30 MBps) [2024-11-26T19:10:15.499Z] Copying: 1010/1024 [MB] (30 MBps) [2024-11-26T19:10:16.433Z] Copying: 1023/1024 [MB] (13 MBps) [2024-11-26T19:10:16.433Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-26 19:10:16.113512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.218 [2024-11-26 19:10:16.113868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:45.218 [2024-11-26 19:10:16.113918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:45.218 [2024-11-26 19:10:16.113931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.218 [2024-11-26 19:10:16.117490] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:45.218 [2024-11-26 19:10:16.124033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.218 [2024-11-26 19:10:16.124126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:45.218 [2024-11-26 19:10:16.124147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.481 ms 00:25:45.218 [2024-11-26 19:10:16.124203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.218 [2024-11-26 19:10:16.135783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.218 
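(Editor's note) The progress line above is self-consistent: the copy runs from just after 'FTL startup' completes at 19:09:40.365 to the final tick at 19:10:16.433Z, about 36 s for 1024 MB, i.e. the reported "average 28 MBps". A back-of-envelope check in plain C — timestamps are read off the log, this is not SPDK code:

```c
#include <stdio.h>

int main(void)
{
    const double copied_mb = 1024.0;         /* "Copying: 1024/1024 [MB]"     */
    const double start_s = 9 * 60 + 40.365;  /* 'FTL startup' finished        */
    const double end_s   = 10 * 60 + 16.433; /* last progress timestamp       */

    /* 1024 MB / ~36.07 s = ~28.4 MBps; the tool rounds this to 28 MBps. */
    printf("average: %.1f MBps\n", copied_mb / (end_s - start_s));
    return 0;
}
```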
[2024-11-26 19:10:16.136090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:45.218 [2024-11-26 19:10:16.136126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.374 ms 00:25:45.218 [2024-11-26 19:10:16.136155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.218 [2024-11-26 19:10:16.157027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.218 [2024-11-26 19:10:16.157128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:45.218 [2024-11-26 19:10:16.157149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.806 ms 00:25:45.218 [2024-11-26 19:10:16.157161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.218 [2024-11-26 19:10:16.164123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.218 [2024-11-26 19:10:16.164210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:45.218 [2024-11-26 19:10:16.164227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.888 ms 00:25:45.218 [2024-11-26 19:10:16.164252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.218 [2024-11-26 19:10:16.197201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.218 [2024-11-26 19:10:16.197268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:45.218 [2024-11-26 19:10:16.197288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.876 ms 00:25:45.218 [2024-11-26 19:10:16.197301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.218 [2024-11-26 19:10:16.215984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.218 [2024-11-26 19:10:16.216067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:45.218 [2024-11-26 19:10:16.216089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.585 ms 00:25:45.218 [2024-11-26 19:10:16.216100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.218 [2024-11-26 19:10:16.291656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.218 [2024-11-26 19:10:16.291759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:45.218 [2024-11-26 19:10:16.291780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.428 ms 00:25:45.218 [2024-11-26 19:10:16.291794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.218 [2024-11-26 19:10:16.325149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.218 [2024-11-26 19:10:16.325249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:45.218 [2024-11-26 19:10:16.325270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.327 ms 00:25:45.218 [2024-11-26 19:10:16.325282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.218 [2024-11-26 19:10:16.358208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.218 [2024-11-26 19:10:16.358552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:45.218 [2024-11-26 19:10:16.358588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.824 ms 00:25:45.218 [2024-11-26 19:10:16.358610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.218 [2024-11-26 19:10:16.391026] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.218 [2024-11-26 19:10:16.391116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:45.218 [2024-11-26 19:10:16.391138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.321 ms 00:25:45.219 [2024-11-26 19:10:16.391150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.219 [2024-11-26 19:10:16.424374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.219 [2024-11-26 19:10:16.424454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:45.219 [2024-11-26 19:10:16.424475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.718 ms 00:25:45.219 [2024-11-26 19:10:16.424487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.219 [2024-11-26 19:10:16.424576] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:45.219 [2024-11-26 19:10:16.424603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126464 / 261120 wr_cnt: 1 state: open 00:25:45.219 [2024-11-26 19:10:16.424617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 
00:25:45.219 [2024-11-26 19:10:16.424814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.424989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 
wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:45.219 [2024-11-26 19:10:16.425582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425722] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:45.220 [2024-11-26 19:10:16.425851] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:45.220 [2024-11-26 19:10:16.425870] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 61cf15d5-2808-44b7-8dd9-1f70723e1c5d 00:25:45.220 [2024-11-26 19:10:16.425888] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126464 00:25:45.220 [2024-11-26 19:10:16.425904] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127424 00:25:45.220 [2024-11-26 19:10:16.425922] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126464 00:25:45.220 [2024-11-26 19:10:16.425938] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:25:45.220 [2024-11-26 19:10:16.425981] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:45.220 [2024-11-26 19:10:16.425994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:45.220 [2024-11-26 19:10:16.426004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:45.220 [2024-11-26 19:10:16.426014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:45.220 [2024-11-26 19:10:16.426023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:45.220 [2024-11-26 19:10:16.426035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.220 [2024-11-26 19:10:16.426046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:45.220 [2024-11-26 19:10:16.426058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:25:45.220 [2024-11-26 19:10:16.426069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.478 [2024-11-26 19:10:16.443694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.478 [2024-11-26 19:10:16.443775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:45.478 [2024-11-26 19:10:16.443808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.541 ms 00:25:45.478 [2024-11-26 19:10:16.443821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.478 [2024-11-26 19:10:16.444314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.478 [2024-11-26 19:10:16.444339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:45.478 [2024-11-26 19:10:16.444354] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:25:45.478 [2024-11-26 19:10:16.444365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.478 [2024-11-26 19:10:16.487753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.478 [2024-11-26 19:10:16.487826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:45.478 [2024-11-26 19:10:16.487845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.478 [2024-11-26 19:10:16.487857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.478 [2024-11-26 19:10:16.487942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.478 [2024-11-26 19:10:16.487957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:45.478 [2024-11-26 19:10:16.487969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.478 [2024-11-26 19:10:16.487980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.478 [2024-11-26 19:10:16.488087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.478 [2024-11-26 19:10:16.488114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:45.478 [2024-11-26 19:10:16.488126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.478 [2024-11-26 19:10:16.488136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.478 [2024-11-26 19:10:16.488160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.478 [2024-11-26 19:10:16.488205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:45.478 [2024-11-26 19:10:16.488220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.478 [2024-11-26 19:10:16.488231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.478 [2024-11-26 19:10:16.596824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.478 [2024-11-26 19:10:16.596922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.478 [2024-11-26 19:10:16.596942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.478 [2024-11-26 19:10:16.596954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.478 [2024-11-26 19:10:16.684285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.478 [2024-11-26 19:10:16.684372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.478 [2024-11-26 19:10:16.684392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.478 [2024-11-26 19:10:16.684406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.478 [2024-11-26 19:10:16.684538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.479 [2024-11-26 19:10:16.684558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.479 [2024-11-26 19:10:16.684570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.479 [2024-11-26 19:10:16.684586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.479 [2024-11-26 19:10:16.684634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.479 [2024-11-26 19:10:16.684649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
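(Editor's note) The statistics dump a few lines up gives the write-amplification figure directly: WAF is total media writes divided by user writes, and 127424 / 126464 ≈ 1.0076, exactly as logged; the 960-block difference is presumably the FTL's own bookkeeping traffic (metadata and relocation writes). A one-line check in C using the figures from the dump:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t total_writes = 127424; /* "total writes" (blocks) */
    const uint64_t user_writes  = 126464; /* "user writes"  (blocks) */

    /* Prints "WAF: 1.0076", matching the ftl_dev_dump_stats output. */
    printf("WAF: %.4f\n", (double)total_writes / (double)user_writes);
    printf("overhead: %llu blocks\n",
           (unsigned long long)(total_writes - user_writes));
    return 0;
}
```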
00:25:45.479 [2024-11-26 19:10:16.684660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.479 [2024-11-26 19:10:16.684671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.479 [2024-11-26 19:10:16.684801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.479 [2024-11-26 19:10:16.684821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.479 [2024-11-26 19:10:16.684834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.479 [2024-11-26 19:10:16.684850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.479 [2024-11-26 19:10:16.684901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.479 [2024-11-26 19:10:16.684919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:45.479 [2024-11-26 19:10:16.684931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.479 [2024-11-26 19:10:16.684943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.479 [2024-11-26 19:10:16.684989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.479 [2024-11-26 19:10:16.685004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:45.479 [2024-11-26 19:10:16.685015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.479 [2024-11-26 19:10:16.685026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.479 [2024-11-26 19:10:16.685084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.479 [2024-11-26 19:10:16.685100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.479 [2024-11-26 19:10:16.685111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.479 [2024-11-26 19:10:16.685122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.479 [2024-11-26 19:10:16.685340] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 573.978 ms, result 0 00:25:46.852 00:25:46.852 00:25:47.110 19:10:18 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:47.110 [2024-11-26 19:10:18.165448] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
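(Editor's note) The spdk_dd invocation above drives the restore verification: as with classic dd, --skip and --count are presumably counted in input blocks, so with a 4096-byte FTL logical block (an assumption on my part, consistent with the copy totals) the read covers 1024 MiB starting 512 MiB into ftl0 — the same 1024 MB total the earlier copy reported. Illustrative arithmetic only, not spdk_dd source:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t block = 4096;   /* assumed FTL logical block size */
    const uint64_t skip  = 131072; /* --skip,  in input blocks       */
    const uint64_t count = 262144; /* --count, in input blocks       */

    /* 131072 * 4096 = 512 MiB offset; 262144 * 4096 = 1024 MiB length. */
    printf("offset: %llu MiB, length: %llu MiB\n",
           (unsigned long long)(skip * block >> 20),
           (unsigned long long)(count * block >> 20));
    return 0;
}
```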
00:25:47.110 [2024-11-26 19:10:18.165846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80688 ] 00:25:47.369 [2024-11-26 19:10:18.340810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.369 [2024-11-26 19:10:18.444629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.627 [2024-11-26 19:10:18.771140] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:47.627 [2024-11-26 19:10:18.771265] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:47.887 [2024-11-26 19:10:18.934064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.887 [2024-11-26 19:10:18.934424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:47.887 [2024-11-26 19:10:18.934461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:47.887 [2024-11-26 19:10:18.934476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.887 [2024-11-26 19:10:18.934578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.887 [2024-11-26 19:10:18.934602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:47.887 [2024-11-26 19:10:18.934616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:47.887 [2024-11-26 19:10:18.934628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.887 [2024-11-26 19:10:18.934662] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:47.887 [2024-11-26 19:10:18.935659] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:47.887 [2024-11-26 19:10:18.935722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.887 [2024-11-26 19:10:18.935738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:47.887 [2024-11-26 19:10:18.935752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:25:47.887 [2024-11-26 19:10:18.935764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.887 [2024-11-26 19:10:18.937025] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:47.887 [2024-11-26 19:10:18.953920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.887 [2024-11-26 19:10:18.954263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:47.887 [2024-11-26 19:10:18.954298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.889 ms 00:25:47.887 [2024-11-26 19:10:18.954313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.887 [2024-11-26 19:10:18.954447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.887 [2024-11-26 19:10:18.954470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:47.887 [2024-11-26 19:10:18.954484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:47.887 [2024-11-26 19:10:18.954497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.887 [2024-11-26 19:10:18.959704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:47.887 [2024-11-26 19:10:18.959778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:47.887 [2024-11-26 19:10:18.959800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.059 ms 00:25:47.887 [2024-11-26 19:10:18.959825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.887 [2024-11-26 19:10:18.959953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.887 [2024-11-26 19:10:18.959975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:47.887 [2024-11-26 19:10:18.959989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:47.887 [2024-11-26 19:10:18.960001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.887 [2024-11-26 19:10:18.960087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.887 [2024-11-26 19:10:18.960106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:47.887 [2024-11-26 19:10:18.960119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:47.887 [2024-11-26 19:10:18.960131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.887 [2024-11-26 19:10:18.960204] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:47.887 [2024-11-26 19:10:18.964676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.887 [2024-11-26 19:10:18.964728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:47.887 [2024-11-26 19:10:18.964751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.511 ms 00:25:47.887 [2024-11-26 19:10:18.964764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.887 [2024-11-26 19:10:18.964823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.887 [2024-11-26 19:10:18.964843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:47.887 [2024-11-26 19:10:18.964857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:47.887 [2024-11-26 19:10:18.964869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.887 [2024-11-26 19:10:18.964966] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:47.887 [2024-11-26 19:10:18.965001] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:47.887 [2024-11-26 19:10:18.965057] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:47.887 [2024-11-26 19:10:18.965085] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:47.887 [2024-11-26 19:10:18.965227] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:47.887 [2024-11-26 19:10:18.965250] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:47.887 [2024-11-26 19:10:18.965268] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:47.887 [2024-11-26 19:10:18.965291] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:47.887 [2024-11-26 19:10:18.965307] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:47.887 [2024-11-26 19:10:18.965319] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:47.887 [2024-11-26 19:10:18.965331] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:47.887 [2024-11-26 19:10:18.965347] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:47.887 [2024-11-26 19:10:18.965359] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:47.887 [2024-11-26 19:10:18.965371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.887 [2024-11-26 19:10:18.965383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:47.887 [2024-11-26 19:10:18.965396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:25:47.887 [2024-11-26 19:10:18.965408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.887 [2024-11-26 19:10:18.965514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.887 [2024-11-26 19:10:18.965531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:47.887 [2024-11-26 19:10:18.965545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:47.887 [2024-11-26 19:10:18.965556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.888 [2024-11-26 19:10:18.965684] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:47.888 [2024-11-26 19:10:18.965706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:47.888 [2024-11-26 19:10:18.965719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.888 [2024-11-26 19:10:18.965731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.888 [2024-11-26 19:10:18.965743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:47.888 [2024-11-26 19:10:18.965754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:47.888 [2024-11-26 19:10:18.965765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:47.888 [2024-11-26 19:10:18.965777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:47.888 [2024-11-26 19:10:18.965788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:47.888 [2024-11-26 19:10:18.965799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.888 [2024-11-26 19:10:18.965810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:47.888 [2024-11-26 19:10:18.965821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:47.888 [2024-11-26 19:10:18.965831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.888 [2024-11-26 19:10:18.965864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:47.888 [2024-11-26 19:10:18.965883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:47.888 [2024-11-26 19:10:18.965894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.888 [2024-11-26 19:10:18.965906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:47.888 [2024-11-26 19:10:18.965917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:47.888 [2024-11-26 19:10:18.965928] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.888 [2024-11-26 19:10:18.965939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:47.888 [2024-11-26 19:10:18.965949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:47.888 [2024-11-26 19:10:18.965960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.888 [2024-11-26 19:10:18.965971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:47.888 [2024-11-26 19:10:18.965982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:47.888 [2024-11-26 19:10:18.965992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.888 [2024-11-26 19:10:18.966003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:47.888 [2024-11-26 19:10:18.966013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:47.888 [2024-11-26 19:10:18.966024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.888 [2024-11-26 19:10:18.966034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:47.888 [2024-11-26 19:10:18.966045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:47.888 [2024-11-26 19:10:18.966056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.888 [2024-11-26 19:10:18.966066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:47.888 [2024-11-26 19:10:18.966077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:47.888 [2024-11-26 19:10:18.966087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.888 [2024-11-26 19:10:18.966098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:47.888 [2024-11-26 19:10:18.966108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:47.888 [2024-11-26 19:10:18.966118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.888 [2024-11-26 19:10:18.966129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:47.888 [2024-11-26 19:10:18.966139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:47.888 [2024-11-26 19:10:18.966150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.888 [2024-11-26 19:10:18.966160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:47.888 [2024-11-26 19:10:18.966536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:47.888 [2024-11-26 19:10:18.966741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.888 [2024-11-26 19:10:18.966925] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:47.888 [2024-11-26 19:10:18.967113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:47.888 [2024-11-26 19:10:18.967196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.888 [2024-11-26 19:10:18.967463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.888 [2024-11-26 19:10:18.967519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:47.888 [2024-11-26 19:10:18.967628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:47.888 [2024-11-26 19:10:18.967719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:47.888 
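A quick aside on the layout arithmetic reported above: the numbers are internally consistent. With 20971520 L2P entries at 4 bytes apiece ("L2P address size: 4"), the L2P table needs exactly 80 MiB, which is the size shown for "Region l2p" and, assuming a 4096-byte FTL block (block_size=4096 shows up later in this log for the dirty_shutdown setup), also matches the blk_sz:0x5000 reported for the type:0x2 region, presumably the L2P, in the superblock dump that follows. A minimal shell check, with the block size flagged as an assumption:

  # Sanity-check the L2P region size from the layout dump above.
  entries=20971520      # "L2P entries: 20971520"
  entry_size=4          # "L2P address size: 4" (bytes per entry)
  block_size=4096       # assumption: 4 KiB FTL block (block_size=4096 later in this log)
  bytes=$(( entries * entry_size ))
  echo "L2P size:   $(( bytes / 1024 / 1024 )) MiB"     # 80 -> "blocks: 80.00 MiB"
  printf 'L2P blocks: 0x%x\n' $(( bytes / block_size )) # 0x5000 -> "blk_sz:0x5000"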
[2024-11-26 19:10:18.967821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:47.888 [2024-11-26 19:10:18.967923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:47.888 [2024-11-26 19:10:18.967975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:47.888 [2024-11-26 19:10:18.968070] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:47.888 [2024-11-26 19:10:18.968221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.888 [2024-11-26 19:10:18.968378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:47.888 [2024-11-26 19:10:18.968398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:47.888 [2024-11-26 19:10:18.968410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:47.888 [2024-11-26 19:10:18.968422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:47.888 [2024-11-26 19:10:18.968433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:47.888 [2024-11-26 19:10:18.968445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:47.888 [2024-11-26 19:10:18.968456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:47.888 [2024-11-26 19:10:18.968468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:47.888 [2024-11-26 19:10:18.968479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:47.888 [2024-11-26 19:10:18.968490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:47.888 [2024-11-26 19:10:18.968502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:47.888 [2024-11-26 19:10:18.968513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:47.888 [2024-11-26 19:10:18.968524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:47.888 [2024-11-26 19:10:18.968537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:47.888 [2024-11-26 19:10:18.968548] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:47.888 [2024-11-26 19:10:18.968562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.888 [2024-11-26 19:10:18.968574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:47.888 [2024-11-26 19:10:18.968586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:47.888 [2024-11-26 19:10:18.968598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:47.888 [2024-11-26 19:10:18.968609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:47.888 [2024-11-26 19:10:18.968625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.888 [2024-11-26 19:10:18.968638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:47.888 [2024-11-26 19:10:18.968651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.012 ms 00:25:47.888 [2024-11-26 19:10:18.968662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.888 [2024-11-26 19:10:19.003882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.888 [2024-11-26 19:10:19.004202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:47.888 [2024-11-26 19:10:19.004344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.128 ms 00:25:47.888 [2024-11-26 19:10:19.004411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.888 [2024-11-26 19:10:19.004620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.888 [2024-11-26 19:10:19.004673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:47.888 [2024-11-26 19:10:19.004778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:47.889 [2024-11-26 19:10:19.004921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.889 [2024-11-26 19:10:19.053088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.889 [2024-11-26 19:10:19.053452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:47.889 [2024-11-26 19:10:19.053585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.926 ms 00:25:47.889 [2024-11-26 19:10:19.053737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.889 [2024-11-26 19:10:19.053873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.889 [2024-11-26 19:10:19.054098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:47.889 [2024-11-26 19:10:19.054253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:47.889 [2024-11-26 19:10:19.054384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.889 [2024-11-26 19:10:19.054918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.889 [2024-11-26 19:10:19.055064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:47.889 [2024-11-26 19:10:19.055193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:25:47.889 [2024-11-26 19:10:19.055251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.889 [2024-11-26 19:10:19.055538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.889 [2024-11-26 19:10:19.055767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:47.889 [2024-11-26 19:10:19.055903] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:25:47.889 [2024-11-26 19:10:19.056039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.889 [2024-11-26 19:10:19.073323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.889 [2024-11-26 19:10:19.073616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:47.889 [2024-11-26 19:10:19.073742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.199 ms 00:25:47.889 [2024-11-26 19:10:19.073854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.889 [2024-11-26 19:10:19.091423] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:47.889 [2024-11-26 19:10:19.091803] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:47.889 [2024-11-26 19:10:19.092085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.889 [2024-11-26 19:10:19.092201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:47.889 [2024-11-26 19:10:19.092262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.992 ms 00:25:47.889 [2024-11-26 19:10:19.092328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.151 [2024-11-26 19:10:19.123691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.151 [2024-11-26 19:10:19.124054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:48.151 [2024-11-26 19:10:19.124206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.173 ms 00:25:48.151 [2024-11-26 19:10:19.124323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.152 [2024-11-26 19:10:19.141345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.152 [2024-11-26 19:10:19.141675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:48.152 [2024-11-26 19:10:19.141812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.795 ms 00:25:48.152 [2024-11-26 19:10:19.141865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.152 [2024-11-26 19:10:19.158655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.152 [2024-11-26 19:10:19.158971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:48.152 [2024-11-26 19:10:19.159102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.665 ms 00:25:48.152 [2024-11-26 19:10:19.159242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.152 [2024-11-26 19:10:19.160297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.152 [2024-11-26 19:10:19.160447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:48.152 [2024-11-26 19:10:19.160584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:25:48.152 [2024-11-26 19:10:19.160641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.152 [2024-11-26 19:10:19.237879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.152 [2024-11-26 19:10:19.238202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:48.152 [2024-11-26 19:10:19.238368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.118 ms 00:25:48.152 [2024-11-26 19:10:19.238425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.152 [2024-11-26 19:10:19.251760] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:48.152 [2024-11-26 19:10:19.254789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.152 [2024-11-26 19:10:19.255000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:48.152 [2024-11-26 19:10:19.255125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.197 ms 00:25:48.152 [2024-11-26 19:10:19.255204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.152 [2024-11-26 19:10:19.255480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.152 [2024-11-26 19:10:19.255518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:48.152 [2024-11-26 19:10:19.255541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:48.152 [2024-11-26 19:10:19.255554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.152 [2024-11-26 19:10:19.257283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.152 [2024-11-26 19:10:19.257324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:48.152 [2024-11-26 19:10:19.257340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:25:48.152 [2024-11-26 19:10:19.257352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.152 [2024-11-26 19:10:19.257400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.152 [2024-11-26 19:10:19.257417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:48.152 [2024-11-26 19:10:19.257431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:48.152 [2024-11-26 19:10:19.257443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.152 [2024-11-26 19:10:19.257494] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:48.152 [2024-11-26 19:10:19.257512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.152 [2024-11-26 19:10:19.257524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:48.152 [2024-11-26 19:10:19.257537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:48.152 [2024-11-26 19:10:19.257549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.152 [2024-11-26 19:10:19.290226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.152 [2024-11-26 19:10:19.290316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:48.152 [2024-11-26 19:10:19.290355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.642 ms 00:25:48.152 [2024-11-26 19:10:19.290370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.152 [2024-11-26 19:10:19.290526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.152 [2024-11-26 19:10:19.290548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:48.152 [2024-11-26 19:10:19.290562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:48.152 [2024-11-26 19:10:19.290574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
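Each management step above is reported by trace_step as a four-record group: Action (or Rollback), name, duration, and status. For long runs like this it can help to pull the per-step timings back out of the captured log; a hypothetical helper follows, where "ftl.log" is a placeholder for a saved copy of this output, the records are assumed one per line as originally emitted (the wrapping here is from capture), and the 428/430 source-line tags are as they appear in this build:

  # List FTL management steps by duration, slowest first.
  awk '
      / 428:trace_step: / { sub(/.*name: /, "");     name = $0 }   # remember the step name
      / 430:trace_step: / { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                            printf "%10.3f ms  %s\n", $0, name }   # pair it with its duration
  ' ftl.log | sort -rn | head

Against the startup sequence above this would put "Restore P2L checkpoints" (77.118 ms) and "Initialize NV cache" (47.926 ms) at the top, consistent with the total reported by the finish_msg record that follows.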
00:25:48.152 [2024-11-26 19:10:19.292441] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 357.213 ms, result 0 00:25:49.528  [2024-11-26T19:10:21.676Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-26T19:10:22.611Z] Copying: 53/1024 [MB] (27 MBps) [2024-11-26T19:10:23.547Z] Copying: 79/1024 [MB] (26 MBps) [2024-11-26T19:10:24.937Z] Copying: 105/1024 [MB] (25 MBps) [2024-11-26T19:10:25.901Z] Copying: 130/1024 [MB] (25 MBps) [2024-11-26T19:10:26.837Z] Copying: 155/1024 [MB] (25 MBps) [2024-11-26T19:10:27.773Z] Copying: 183/1024 [MB] (27 MBps) [2024-11-26T19:10:28.710Z] Copying: 209/1024 [MB] (26 MBps) [2024-11-26T19:10:29.653Z] Copying: 235/1024 [MB] (25 MBps) [2024-11-26T19:10:30.589Z] Copying: 262/1024 [MB] (26 MBps) [2024-11-26T19:10:31.534Z] Copying: 290/1024 [MB] (28 MBps) [2024-11-26T19:10:32.910Z] Copying: 315/1024 [MB] (25 MBps) [2024-11-26T19:10:33.844Z] Copying: 342/1024 [MB] (27 MBps) [2024-11-26T19:10:34.780Z] Copying: 369/1024 [MB] (26 MBps) [2024-11-26T19:10:35.714Z] Copying: 395/1024 [MB] (26 MBps) [2024-11-26T19:10:36.649Z] Copying: 424/1024 [MB] (28 MBps) [2024-11-26T19:10:37.592Z] Copying: 450/1024 [MB] (26 MBps) [2024-11-26T19:10:38.967Z] Copying: 475/1024 [MB] (24 MBps) [2024-11-26T19:10:39.533Z] Copying: 501/1024 [MB] (26 MBps) [2024-11-26T19:10:40.917Z] Copying: 526/1024 [MB] (25 MBps) [2024-11-26T19:10:41.853Z] Copying: 550/1024 [MB] (24 MBps) [2024-11-26T19:10:42.790Z] Copying: 577/1024 [MB] (26 MBps) [2024-11-26T19:10:43.791Z] Copying: 600/1024 [MB] (23 MBps) [2024-11-26T19:10:44.726Z] Copying: 627/1024 [MB] (26 MBps) [2024-11-26T19:10:45.661Z] Copying: 649/1024 [MB] (21 MBps) [2024-11-26T19:10:46.596Z] Copying: 675/1024 [MB] (25 MBps) [2024-11-26T19:10:47.531Z] Copying: 700/1024 [MB] (25 MBps) [2024-11-26T19:10:48.908Z] Copying: 724/1024 [MB] (24 MBps) [2024-11-26T19:10:49.841Z] Copying: 754/1024 [MB] (29 MBps) [2024-11-26T19:10:50.771Z] Copying: 778/1024 [MB] (23 MBps) [2024-11-26T19:10:51.705Z] Copying: 804/1024 [MB] (26 MBps) [2024-11-26T19:10:52.639Z] Copying: 827/1024 [MB] (22 MBps) [2024-11-26T19:10:53.573Z] Copying: 853/1024 [MB] (25 MBps) [2024-11-26T19:10:54.949Z] Copying: 880/1024 [MB] (27 MBps) [2024-11-26T19:10:55.885Z] Copying: 908/1024 [MB] (27 MBps) [2024-11-26T19:10:56.885Z] Copying: 933/1024 [MB] (25 MBps) [2024-11-26T19:10:57.820Z] Copying: 960/1024 [MB] (27 MBps) [2024-11-26T19:10:58.754Z] Copying: 988/1024 [MB] (27 MBps) [2024-11-26T19:10:59.011Z] Copying: 1015/1024 [MB] (27 MBps) [2024-11-26T19:10:59.577Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-26 19:10:59.296784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.362 [2024-11-26 19:10:59.296871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:28.362 [2024-11-26 19:10:59.296914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:28.362 [2024-11-26 19:10:59.296928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.362 [2024-11-26 19:10:59.296961] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:28.362 [2024-11-26 19:10:59.301499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.362 [2024-11-26 19:10:59.301569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:28.362 [2024-11-26 19:10:59.301604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.504 ms 00:26:28.362 
[2024-11-26 19:10:59.301621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.362 [2024-11-26 19:10:59.301909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.362 [2024-11-26 19:10:59.301935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:28.362 [2024-11-26 19:10:59.301950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:26:28.362 [2024-11-26 19:10:59.301968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.362 [2024-11-26 19:10:59.306106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.362 [2024-11-26 19:10:59.306193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:28.362 [2024-11-26 19:10:59.306215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.109 ms 00:26:28.363 [2024-11-26 19:10:59.306228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.363 [2024-11-26 19:10:59.315032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.363 [2024-11-26 19:10:59.315381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:28.363 [2024-11-26 19:10:59.315414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.740 ms 00:26:28.363 [2024-11-26 19:10:59.315447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.363 [2024-11-26 19:10:59.348662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.363 [2024-11-26 19:10:59.348770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:28.363 [2024-11-26 19:10:59.348795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.081 ms 00:26:28.363 [2024-11-26 19:10:59.348807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.363 [2024-11-26 19:10:59.367493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.363 [2024-11-26 19:10:59.367587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:28.363 [2024-11-26 19:10:59.367610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.572 ms 00:26:28.363 [2024-11-26 19:10:59.367623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.363 [2024-11-26 19:10:59.452303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.363 [2024-11-26 19:10:59.452431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:28.363 [2024-11-26 19:10:59.452456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.559 ms 00:26:28.363 [2024-11-26 19:10:59.452469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.363 [2024-11-26 19:10:59.485884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.363 [2024-11-26 19:10:59.485983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:28.363 [2024-11-26 19:10:59.486012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.382 ms 00:26:28.363 [2024-11-26 19:10:59.486024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.363 [2024-11-26 19:10:59.519355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.363 [2024-11-26 19:10:59.519444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:28.363 [2024-11-26 19:10:59.519464] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.226 ms 00:26:28.363 [2024-11-26 19:10:59.519476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.363 [2024-11-26 19:10:59.551913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.363 [2024-11-26 19:10:59.552007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:28.363 [2024-11-26 19:10:59.552028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.343 ms 00:26:28.363 [2024-11-26 19:10:59.552041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.623 [2024-11-26 19:10:59.584876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.623 [2024-11-26 19:10:59.584977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:28.623 [2024-11-26 19:10:59.584999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.639 ms 00:26:28.623 [2024-11-26 19:10:59.585012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.623 [2024-11-26 19:10:59.585103] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:28.623 [2024-11-26 19:10:59.585130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:26:28.623 [2024-11-26 19:10:59.585145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585371] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:28.623 [2024-11-26 19:10:59.585523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585667] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 
19:10:59.585964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.585988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:26:28.624 [2024-11-26 19:10:59.586273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:28.624 [2024-11-26 19:10:59.586390] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:28.624 [2024-11-26 19:10:59.586402] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 61cf15d5-2808-44b7-8dd9-1f70723e1c5d 00:26:28.624 [2024-11-26 19:10:59.586414] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:26:28.624 [2024-11-26 19:10:59.586425] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5568 00:26:28.624 [2024-11-26 19:10:59.586436] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4608 00:26:28.624 [2024-11-26 19:10:59.586449] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.2083 00:26:28.624 [2024-11-26 19:10:59.586470] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:28.624 [2024-11-26 19:10:59.586497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:28.624 [2024-11-26 19:10:59.586509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:28.624 [2024-11-26 19:10:59.586519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:28.624 [2024-11-26 19:10:59.586529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:28.624 [2024-11-26 19:10:59.586541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.624 [2024-11-26 19:10:59.586553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:28.624 [2024-11-26 19:10:59.586566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:26:28.624 [2024-11-26 19:10:59.586578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.624 [2024-11-26 19:10:59.603897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.624 [2024-11-26 19:10:59.603982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:28.624 [2024-11-26 19:10:59.604018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.242 ms 00:26:28.624 [2024-11-26 19:10:59.604032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.624 [2024-11-26 19:10:59.604523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:26:28.624 [2024-11-26 19:10:59.604549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:28.624 [2024-11-26 19:10:59.604563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:26:28.624 [2024-11-26 19:10:59.604575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.624 [2024-11-26 19:10:59.650482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.624 [2024-11-26 19:10:59.650581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:28.625 [2024-11-26 19:10:59.650608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.625 [2024-11-26 19:10:59.650631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.625 [2024-11-26 19:10:59.650738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.625 [2024-11-26 19:10:59.650758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:28.625 [2024-11-26 19:10:59.650772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.625 [2024-11-26 19:10:59.650784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.625 [2024-11-26 19:10:59.650959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.625 [2024-11-26 19:10:59.650981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:28.625 [2024-11-26 19:10:59.651004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.625 [2024-11-26 19:10:59.651023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.625 [2024-11-26 19:10:59.651062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.625 [2024-11-26 19:10:59.651089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:28.625 [2024-11-26 19:10:59.651111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.625 [2024-11-26 19:10:59.651125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.625 [2024-11-26 19:10:59.758563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.625 [2024-11-26 19:10:59.758697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:28.625 [2024-11-26 19:10:59.758729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.625 [2024-11-26 19:10:59.758748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.884 [2024-11-26 19:10:59.864913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.884 [2024-11-26 19:10:59.865381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:28.884 [2024-11-26 19:10:59.865418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.884 [2024-11-26 19:10:59.865432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.884 [2024-11-26 19:10:59.865574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.884 [2024-11-26 19:10:59.865593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:28.884 [2024-11-26 19:10:59.865606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.884 [2024-11-26 19:10:59.865639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.884 
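Looking back from the rollback records above to the statistics dump a few records earlier, the numbers check out by hand: WAF is simply total writes over user writes, 5568 / 4608 = 1.2083 as reported, with the 960 extra block writes presumably metadata and relocation traffic; and the 100 bands of 261120 blocks each account for essentially the whole base data region, again assuming 4 KiB blocks:

  # Verify the stats dump: WAF and aggregate band capacity.
  awk 'BEGIN { printf "WAF: %.4f (overhead: %d blocks)\n", 5568 / 4608, 5568 - 4608 }'
  # -> WAF: 1.2083 (overhead: 960 blocks)
  echo "bands: $(( 100 * 261120 * 4096 / 1024 / 1024 )) MiB"   # assumes 4 KiB blocks
  # -> bands: 102000 MiB, just under the 102400 MiB "Region data_btm" reported earlier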
[2024-11-26 19:10:59.865703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.884 [2024-11-26 19:10:59.865721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:28.884 [2024-11-26 19:10:59.865734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.884 [2024-11-26 19:10:59.865746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.884 [2024-11-26 19:10:59.865902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.884 [2024-11-26 19:10:59.865923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:28.884 [2024-11-26 19:10:59.865937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.884 [2024-11-26 19:10:59.865949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.884 [2024-11-26 19:10:59.866008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.884 [2024-11-26 19:10:59.866027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:28.884 [2024-11-26 19:10:59.866040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.884 [2024-11-26 19:10:59.866051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.884 [2024-11-26 19:10:59.866099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.884 [2024-11-26 19:10:59.866115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:28.884 [2024-11-26 19:10:59.866128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.884 [2024-11-26 19:10:59.866139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.884 [2024-11-26 19:10:59.866222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.884 [2024-11-26 19:10:59.866242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:28.884 [2024-11-26 19:10:59.866255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.884 [2024-11-26 19:10:59.866266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.884 [2024-11-26 19:10:59.866441] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 569.605 ms, result 0 00:26:29.822 00:26:29.822 00:26:29.822 19:11:00 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:32.423 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79272 00:26:32.423 19:11:03 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79272 ']' 00:26:32.423 19:11:03 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79272 00:26:32.423 Process with pid 79272 is not found 00:26:32.423 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: 
line 958: kill: (79272) - No such process 00:26:32.423 19:11:03 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79272 is not found' 00:26:32.423 Remove shared memory files 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:32.423 19:11:03 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:32.423 ************************************ 00:26:32.423 END TEST ftl_restore 00:26:32.423 ************************************ 00:26:32.423 00:26:32.423 real 3m5.308s 00:26:32.423 user 2m49.680s 00:26:32.424 sys 0m18.864s 00:26:32.424 19:11:03 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.424 19:11:03 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:32.424 19:11:03 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:32.424 19:11:03 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:32.424 19:11:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:32.424 19:11:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:32.424 ************************************ 00:26:32.424 START TEST ftl_dirty_shutdown 00:26:32.424 ************************************ 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:32.424 * Looking for test storage... 
00:26:32.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:32.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.424 --rc genhtml_branch_coverage=1 00:26:32.424 --rc genhtml_function_coverage=1 00:26:32.424 --rc genhtml_legend=1 00:26:32.424 --rc geninfo_all_blocks=1 00:26:32.424 --rc geninfo_unexecuted_blocks=1 00:26:32.424 00:26:32.424 ' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:32.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.424 --rc genhtml_branch_coverage=1 00:26:32.424 --rc genhtml_function_coverage=1 00:26:32.424 --rc genhtml_legend=1 00:26:32.424 --rc geninfo_all_blocks=1 00:26:32.424 --rc geninfo_unexecuted_blocks=1 00:26:32.424 00:26:32.424 ' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:32.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.424 --rc genhtml_branch_coverage=1 00:26:32.424 --rc genhtml_function_coverage=1 00:26:32.424 --rc genhtml_legend=1 00:26:32.424 --rc geninfo_all_blocks=1 00:26:32.424 --rc geninfo_unexecuted_blocks=1 00:26:32.424 00:26:32.424 ' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:32.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.424 --rc genhtml_branch_coverage=1 00:26:32.424 --rc genhtml_function_coverage=1 00:26:32.424 --rc genhtml_legend=1 00:26:32.424 --rc geninfo_all_blocks=1 00:26:32.424 --rc geninfo_unexecuted_blocks=1 00:26:32.424 00:26:32.424 ' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:32.424 19:11:03 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81197 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81197 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81197 ']' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.424 19:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:32.424 [2024-11-26 19:11:03.590761] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
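The trace above shows dirty_shutdown.sh sourcing ftl/common.sh for its path and target defaults, consuming its getopts options (nv_cache=0000:00:10.0, device=0000:00:11.0), arming the restore_kill trap, and launching spdk_tgt pinned to core 0 before blocking in waitforlisten until the RPC socket answers. A minimal stand-alone sketch of that bring-up, assuming the repo paths from the log; the poll loop is a simplified stand-in for autotest_common.sh's waitforlisten helper:

  #!/usr/bin/env bash
  # Bring up the SPDK target and attach the FTL base device (sketch).
  rootdir=/home/vagrant/spdk_repo/spdk
  rpc_py=$rootdir/scripts/rpc.py

  # Core mask 0x1: a single reactor on core 0, matching the -m 0x1 in the trace.
  $rootdir/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  trap 'kill $svcpid; exit 1' SIGINT SIGTERM EXIT

  # Poll /var/tmp/spdk.sock until the target services RPCs
  # (waitforlisten layers retries, pid checks, and diagnostics on top of this).
  until $rpc_py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
  done

  # create_base_bdev in ftl/common.sh then attaches the QEMU NVMe
  # controller at 0000:00:11.0, producing the nvme0n1 bdev dumped below.
  $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

The JSON that follows is the bdev_get_bdevs output that get_bdev_size pipes through jq to recover block_size and num_blocks before the base size check.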
00:26:32.424 [2024-11-26 19:11:03.591164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81197 ] 00:26:32.683 [2024-11-26 19:11:03.760406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.683 [2024-11-26 19:11:03.863703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.616 19:11:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:33.616 19:11:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:33.616 19:11:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:33.616 19:11:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:33.616 19:11:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:33.616 19:11:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:33.616 19:11:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:33.616 19:11:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:33.874 19:11:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:33.874 19:11:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:33.874 19:11:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:33.874 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:33.874 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:33.874 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:33.874 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:33.874 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:34.439 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:34.439 { 00:26:34.439 "name": "nvme0n1", 00:26:34.440 "aliases": [ 00:26:34.440 "55404f1e-1c79-492d-93d7-d49fe83130c3" 00:26:34.440 ], 00:26:34.440 "product_name": "NVMe disk", 00:26:34.440 "block_size": 4096, 00:26:34.440 "num_blocks": 1310720, 00:26:34.440 "uuid": "55404f1e-1c79-492d-93d7-d49fe83130c3", 00:26:34.440 "numa_id": -1, 00:26:34.440 "assigned_rate_limits": { 00:26:34.440 "rw_ios_per_sec": 0, 00:26:34.440 "rw_mbytes_per_sec": 0, 00:26:34.440 "r_mbytes_per_sec": 0, 00:26:34.440 "w_mbytes_per_sec": 0 00:26:34.440 }, 00:26:34.440 "claimed": true, 00:26:34.440 "claim_type": "read_many_write_one", 00:26:34.440 "zoned": false, 00:26:34.440 "supported_io_types": { 00:26:34.440 "read": true, 00:26:34.440 "write": true, 00:26:34.440 "unmap": true, 00:26:34.440 "flush": true, 00:26:34.495 "reset": true, 00:26:34.495 "nvme_admin": true, 00:26:34.495 "nvme_io": true, 00:26:34.495 "nvme_io_md": false, 00:26:34.495 "write_zeroes": true, 00:26:34.495 "zcopy": false, 00:26:34.495 "get_zone_info": false, 00:26:34.495 "zone_management": false, 00:26:34.495 "zone_append": false, 00:26:34.495 "compare": true, 00:26:34.495 "compare_and_write": false, 00:26:34.495 "abort": true, 00:26:34.495 "seek_hole": false, 00:26:34.495 "seek_data": false, 00:26:34.495 
"copy": true, 00:26:34.495 "nvme_iov_md": false 00:26:34.495 }, 00:26:34.495 "driver_specific": { 00:26:34.495 "nvme": [ 00:26:34.495 { 00:26:34.495 "pci_address": "0000:00:11.0", 00:26:34.495 "trid": { 00:26:34.495 "trtype": "PCIe", 00:26:34.495 "traddr": "0000:00:11.0" 00:26:34.495 }, 00:26:34.496 "ctrlr_data": { 00:26:34.496 "cntlid": 0, 00:26:34.496 "vendor_id": "0x1b36", 00:26:34.496 "model_number": "QEMU NVMe Ctrl", 00:26:34.496 "serial_number": "12341", 00:26:34.496 "firmware_revision": "8.0.0", 00:26:34.496 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:34.496 "oacs": { 00:26:34.496 "security": 0, 00:26:34.496 "format": 1, 00:26:34.496 "firmware": 0, 00:26:34.496 "ns_manage": 1 00:26:34.496 }, 00:26:34.496 "multi_ctrlr": false, 00:26:34.496 "ana_reporting": false 00:26:34.496 }, 00:26:34.496 "vs": { 00:26:34.496 "nvme_version": "1.4" 00:26:34.496 }, 00:26:34.496 "ns_data": { 00:26:34.496 "id": 1, 00:26:34.496 "can_share": false 00:26:34.496 } 00:26:34.496 } 00:26:34.496 ], 00:26:34.496 "mp_policy": "active_passive" 00:26:34.496 } 00:26:34.496 } 00:26:34.496 ]' 00:26:34.496 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:34.496 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:34.496 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:34.496 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:34.496 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:34.496 19:11:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:34.496 19:11:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:34.496 19:11:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:34.496 19:11:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:34.496 19:11:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:34.496 19:11:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:34.754 19:11:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b8d4e7ea-7f42-4def-abe8-59264bf90b77 00:26:34.754 19:11:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:34.754 19:11:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b8d4e7ea-7f42-4def-abe8-59264bf90b77 00:26:35.011 19:11:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:35.575 19:11:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a82b8d63-0b8a-4466-908b-59f9a65880e3 00:26:35.575 19:11:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a82b8d63-0b8a-4466-908b-59f9a65880e3 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=23676590-0d73-423e-a7ba-759e62e6200b 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 23676590-0d73-423e-a7ba-759e62e6200b 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=23676590-0d73-423e-a7ba-759e62e6200b 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 23676590-0d73-423e-a7ba-759e62e6200b 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=23676590-0d73-423e-a7ba-759e62e6200b 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:35.833 19:11:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23676590-0d73-423e-a7ba-759e62e6200b 00:26:36.091 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:36.091 { 00:26:36.091 "name": "23676590-0d73-423e-a7ba-759e62e6200b", 00:26:36.091 "aliases": [ 00:26:36.091 "lvs/nvme0n1p0" 00:26:36.091 ], 00:26:36.091 "product_name": "Logical Volume", 00:26:36.091 "block_size": 4096, 00:26:36.091 "num_blocks": 26476544, 00:26:36.091 "uuid": "23676590-0d73-423e-a7ba-759e62e6200b", 00:26:36.091 "assigned_rate_limits": { 00:26:36.091 "rw_ios_per_sec": 0, 00:26:36.091 "rw_mbytes_per_sec": 0, 00:26:36.091 "r_mbytes_per_sec": 0, 00:26:36.091 "w_mbytes_per_sec": 0 00:26:36.091 }, 00:26:36.091 "claimed": false, 00:26:36.091 "zoned": false, 00:26:36.091 "supported_io_types": { 00:26:36.091 "read": true, 00:26:36.091 "write": true, 00:26:36.091 "unmap": true, 00:26:36.091 "flush": false, 00:26:36.091 "reset": true, 00:26:36.091 "nvme_admin": false, 00:26:36.091 "nvme_io": false, 00:26:36.091 "nvme_io_md": false, 00:26:36.091 "write_zeroes": true, 00:26:36.091 "zcopy": false, 00:26:36.091 "get_zone_info": false, 00:26:36.091 "zone_management": false, 00:26:36.091 "zone_append": false, 00:26:36.091 "compare": false, 00:26:36.091 "compare_and_write": false, 00:26:36.091 "abort": false, 00:26:36.091 "seek_hole": true, 00:26:36.091 "seek_data": true, 00:26:36.091 "copy": false, 00:26:36.091 "nvme_iov_md": false 00:26:36.091 }, 00:26:36.091 "driver_specific": { 00:26:36.091 "lvol": { 00:26:36.091 "lvol_store_uuid": "a82b8d63-0b8a-4466-908b-59f9a65880e3", 00:26:36.091 "base_bdev": "nvme0n1", 00:26:36.091 "thin_provision": true, 00:26:36.091 "num_allocated_clusters": 0, 00:26:36.091 "snapshot": false, 00:26:36.091 "clone": false, 00:26:36.091 "esnap_clone": false 00:26:36.091 } 00:26:36.091 } 00:26:36.091 } 00:26:36.091 ]' 00:26:36.091 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:36.092 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:36.092 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:36.092 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:36.092 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:36.092 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:36.092 19:11:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:36.092 19:11:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:36.092 19:11:07 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:36.670 19:11:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:36.670 19:11:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:36.670 19:11:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 23676590-0d73-423e-a7ba-759e62e6200b 00:26:36.670 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=23676590-0d73-423e-a7ba-759e62e6200b 00:26:36.670 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:36.670 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:36.670 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:36.670 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23676590-0d73-423e-a7ba-759e62e6200b 00:26:36.928 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:36.928 { 00:26:36.928 "name": "23676590-0d73-423e-a7ba-759e62e6200b", 00:26:36.929 "aliases": [ 00:26:36.929 "lvs/nvme0n1p0" 00:26:36.929 ], 00:26:36.929 "product_name": "Logical Volume", 00:26:36.929 "block_size": 4096, 00:26:36.929 "num_blocks": 26476544, 00:26:36.929 "uuid": "23676590-0d73-423e-a7ba-759e62e6200b", 00:26:36.929 "assigned_rate_limits": { 00:26:36.929 "rw_ios_per_sec": 0, 00:26:36.929 "rw_mbytes_per_sec": 0, 00:26:36.929 "r_mbytes_per_sec": 0, 00:26:36.929 "w_mbytes_per_sec": 0 00:26:36.929 }, 00:26:36.929 "claimed": false, 00:26:36.929 "zoned": false, 00:26:36.929 "supported_io_types": { 00:26:36.929 "read": true, 00:26:36.929 "write": true, 00:26:36.929 "unmap": true, 00:26:36.929 "flush": false, 00:26:36.929 "reset": true, 00:26:36.929 "nvme_admin": false, 00:26:36.929 "nvme_io": false, 00:26:36.929 "nvme_io_md": false, 00:26:36.929 "write_zeroes": true, 00:26:36.929 "zcopy": false, 00:26:36.929 "get_zone_info": false, 00:26:36.929 "zone_management": false, 00:26:36.929 "zone_append": false, 00:26:36.929 "compare": false, 00:26:36.929 "compare_and_write": false, 00:26:36.929 "abort": false, 00:26:36.929 "seek_hole": true, 00:26:36.929 "seek_data": true, 00:26:36.929 "copy": false, 00:26:36.929 "nvme_iov_md": false 00:26:36.929 }, 00:26:36.929 "driver_specific": { 00:26:36.929 "lvol": { 00:26:36.929 "lvol_store_uuid": "a82b8d63-0b8a-4466-908b-59f9a65880e3", 00:26:36.929 "base_bdev": "nvme0n1", 00:26:36.929 "thin_provision": true, 00:26:36.929 "num_allocated_clusters": 0, 00:26:36.929 "snapshot": false, 00:26:36.929 "clone": false, 00:26:36.929 "esnap_clone": false 00:26:36.929 } 00:26:36.929 } 00:26:36.929 } 00:26:36.929 ]' 00:26:36.929 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:36.929 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:36.929 19:11:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:36.929 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:36.929 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:36.929 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:36.929 19:11:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:36.929 19:11:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:37.187 19:11:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:37.187 19:11:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 23676590-0d73-423e-a7ba-759e62e6200b 00:26:37.187 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=23676590-0d73-423e-a7ba-759e62e6200b 00:26:37.187 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:37.187 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:37.187 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:37.187 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23676590-0d73-423e-a7ba-759e62e6200b 00:26:37.447 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:37.447 { 00:26:37.447 "name": "23676590-0d73-423e-a7ba-759e62e6200b", 00:26:37.447 "aliases": [ 00:26:37.447 "lvs/nvme0n1p0" 00:26:37.447 ], 00:26:37.447 "product_name": "Logical Volume", 00:26:37.447 "block_size": 4096, 00:26:37.447 "num_blocks": 26476544, 00:26:37.447 "uuid": "23676590-0d73-423e-a7ba-759e62e6200b", 00:26:37.447 "assigned_rate_limits": { 00:26:37.447 "rw_ios_per_sec": 0, 00:26:37.447 "rw_mbytes_per_sec": 0, 00:26:37.447 "r_mbytes_per_sec": 0, 00:26:37.447 "w_mbytes_per_sec": 0 00:26:37.447 }, 00:26:37.447 "claimed": false, 00:26:37.447 "zoned": false, 00:26:37.447 "supported_io_types": { 00:26:37.447 "read": true, 00:26:37.447 "write": true, 00:26:37.447 "unmap": true, 00:26:37.447 "flush": false, 00:26:37.447 "reset": true, 00:26:37.447 "nvme_admin": false, 00:26:37.447 "nvme_io": false, 00:26:37.447 "nvme_io_md": false, 00:26:37.447 "write_zeroes": true, 00:26:37.447 "zcopy": false, 00:26:37.447 "get_zone_info": false, 00:26:37.447 "zone_management": false, 00:26:37.447 "zone_append": false, 00:26:37.447 "compare": false, 00:26:37.447 "compare_and_write": false, 00:26:37.447 "abort": false, 00:26:37.447 "seek_hole": true, 00:26:37.447 "seek_data": true, 00:26:37.447 "copy": false, 00:26:37.447 "nvme_iov_md": false 00:26:37.447 }, 00:26:37.447 "driver_specific": { 00:26:37.447 "lvol": { 00:26:37.447 "lvol_store_uuid": "a82b8d63-0b8a-4466-908b-59f9a65880e3", 00:26:37.447 "base_bdev": "nvme0n1", 00:26:37.447 "thin_provision": true, 00:26:37.447 "num_allocated_clusters": 0, 00:26:37.447 "snapshot": false, 00:26:37.447 "clone": false, 00:26:37.447 "esnap_clone": false 00:26:37.447 } 00:26:37.447 } 00:26:37.447 } 00:26:37.447 ]' 00:26:37.447 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:37.705 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:37.705 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:37.705 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:37.705 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:37.705 19:11:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:37.705 19:11:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:37.706 19:11:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 23676590-0d73-423e-a7ba-759e62e6200b 
--l2p_dram_limit 10' 00:26:37.706 19:11:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:37.706 19:11:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:37.706 19:11:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:37.706 19:11:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 23676590-0d73-423e-a7ba-759e62e6200b --l2p_dram_limit 10 -c nvc0n1p0 00:26:37.965 [2024-11-26 19:11:09.025105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.965 [2024-11-26 19:11:09.025209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:37.965 [2024-11-26 19:11:09.025239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:37.965 [2024-11-26 19:11:09.025252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.965 [2024-11-26 19:11:09.025360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.965 [2024-11-26 19:11:09.025381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:37.965 [2024-11-26 19:11:09.025397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:37.965 [2024-11-26 19:11:09.025409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.965 [2024-11-26 19:11:09.025443] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:37.965 [2024-11-26 19:11:09.026523] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:37.965 [2024-11-26 19:11:09.026566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.965 [2024-11-26 19:11:09.026580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:37.965 [2024-11-26 19:11:09.026596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:26:37.965 [2024-11-26 19:11:09.026608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.965 [2024-11-26 19:11:09.026774] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 023b4b4f-bcf0-4338-8c93-3af230e4a41f 00:26:37.965 [2024-11-26 19:11:09.027868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.965 [2024-11-26 19:11:09.027915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:37.965 [2024-11-26 19:11:09.027933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:37.965 [2024-11-26 19:11:09.027948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.965 [2024-11-26 19:11:09.032709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.965 [2024-11-26 19:11:09.032794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:37.965 [2024-11-26 19:11:09.032814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.682 ms 00:26:37.965 [2024-11-26 19:11:09.032829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.965 [2024-11-26 19:11:09.032984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.965 [2024-11-26 19:11:09.033009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:37.966 [2024-11-26 19:11:09.033023] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:26:37.966 [2024-11-26 19:11:09.033043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.966 [2024-11-26 19:11:09.033120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.966 [2024-11-26 19:11:09.033142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:37.966 [2024-11-26 19:11:09.033160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:37.966 [2024-11-26 19:11:09.033191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.966 [2024-11-26 19:11:09.033248] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:37.966 [2024-11-26 19:11:09.038048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.966 [2024-11-26 19:11:09.038109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:37.966 [2024-11-26 19:11:09.038132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.805 ms 00:26:37.966 [2024-11-26 19:11:09.038145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.966 [2024-11-26 19:11:09.038235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.966 [2024-11-26 19:11:09.038254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:37.966 [2024-11-26 19:11:09.038270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:37.966 [2024-11-26 19:11:09.038282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.966 [2024-11-26 19:11:09.038359] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:37.966 [2024-11-26 19:11:09.038528] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:37.966 [2024-11-26 19:11:09.038564] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:37.966 [2024-11-26 19:11:09.038590] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:37.966 [2024-11-26 19:11:09.038619] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:37.966 [2024-11-26 19:11:09.038635] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:37.966 [2024-11-26 19:11:09.038650] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:37.966 [2024-11-26 19:11:09.038670] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:37.966 [2024-11-26 19:11:09.038693] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:37.966 [2024-11-26 19:11:09.038705] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:37.966 [2024-11-26 19:11:09.038720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.966 [2024-11-26 19:11:09.038754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:37.966 [2024-11-26 19:11:09.038777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:26:37.966 [2024-11-26 19:11:09.038790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.966 [2024-11-26 19:11:09.038904] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.966 [2024-11-26 19:11:09.038935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:37.966 [2024-11-26 19:11:09.038960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:37.966 [2024-11-26 19:11:09.038974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.966 [2024-11-26 19:11:09.039109] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:37.966 [2024-11-26 19:11:09.039140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:37.966 [2024-11-26 19:11:09.039158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:37.966 [2024-11-26 19:11:09.039199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:37.966 [2024-11-26 19:11:09.039244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:37.966 [2024-11-26 19:11:09.039270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:37.966 [2024-11-26 19:11:09.039283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:37.966 [2024-11-26 19:11:09.039307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:37.966 [2024-11-26 19:11:09.039318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:37.966 [2024-11-26 19:11:09.039336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:37.966 [2024-11-26 19:11:09.039354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:37.966 [2024-11-26 19:11:09.039370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:37.966 [2024-11-26 19:11:09.039390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:37.966 [2024-11-26 19:11:09.039419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:37.966 [2024-11-26 19:11:09.039434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:37.966 [2024-11-26 19:11:09.039459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.966 [2024-11-26 19:11:09.039482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:37.966 [2024-11-26 19:11:09.039494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.966 [2024-11-26 19:11:09.039519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:37.966 [2024-11-26 19:11:09.039532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.966 [2024-11-26 19:11:09.039556] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:37.966 [2024-11-26 19:11:09.039567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.966 [2024-11-26 19:11:09.039591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:37.966 [2024-11-26 19:11:09.039605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:37.966 [2024-11-26 19:11:09.039629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:37.966 [2024-11-26 19:11:09.039640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:37.966 [2024-11-26 19:11:09.039653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:37.966 [2024-11-26 19:11:09.039666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:37.966 [2024-11-26 19:11:09.039679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:37.966 [2024-11-26 19:11:09.039690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:37.966 [2024-11-26 19:11:09.039714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:37.966 [2024-11-26 19:11:09.039745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039758] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:37.966 [2024-11-26 19:11:09.039772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:37.966 [2024-11-26 19:11:09.039784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:37.966 [2024-11-26 19:11:09.039799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.966 [2024-11-26 19:11:09.039812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:37.966 [2024-11-26 19:11:09.039828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:37.966 [2024-11-26 19:11:09.039839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:37.966 [2024-11-26 19:11:09.039852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:37.966 [2024-11-26 19:11:09.039863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:37.966 [2024-11-26 19:11:09.039877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:37.966 [2024-11-26 19:11:09.039893] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:37.966 [2024-11-26 19:11:09.039912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.966 [2024-11-26 19:11:09.039926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:37.966 [2024-11-26 19:11:09.039940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:37.966 [2024-11-26 19:11:09.039954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:37.966 [2024-11-26 19:11:09.039969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:37.966 [2024-11-26 19:11:09.039981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:37.966 [2024-11-26 19:11:09.039994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:37.966 [2024-11-26 19:11:09.040006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:37.966 [2024-11-26 19:11:09.040021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:37.966 [2024-11-26 19:11:09.040033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:37.966 [2024-11-26 19:11:09.040048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:37.966 [2024-11-26 19:11:09.040060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:37.966 [2024-11-26 19:11:09.040074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:37.967 [2024-11-26 19:11:09.040087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:37.967 [2024-11-26 19:11:09.040103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:37.967 [2024-11-26 19:11:09.040116] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:37.967 [2024-11-26 19:11:09.040131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.967 [2024-11-26 19:11:09.040144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:37.967 [2024-11-26 19:11:09.040158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:37.967 [2024-11-26 19:11:09.040183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:37.967 [2024-11-26 19:11:09.040200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:37.967 [2024-11-26 19:11:09.040214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.967 [2024-11-26 19:11:09.040228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:37.967 [2024-11-26 19:11:09.040241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:26:37.967 [2024-11-26 19:11:09.040254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.967 [2024-11-26 19:11:09.040310] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:37.967 [2024-11-26 19:11:09.040345] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:39.869 [2024-11-26 19:11:11.054413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.869 [2024-11-26 19:11:11.054732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:39.869 [2024-11-26 19:11:11.054863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2014.110 ms 00:26:39.869 [2024-11-26 19:11:11.054924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.128 [2024-11-26 19:11:11.088068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.128 [2024-11-26 19:11:11.088392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:40.128 [2024-11-26 19:11:11.088565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.733 ms 00:26:40.128 [2024-11-26 19:11:11.088628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.128 [2024-11-26 19:11:11.089073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.128 [2024-11-26 19:11:11.089232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:40.128 [2024-11-26 19:11:11.089356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:40.128 [2024-11-26 19:11:11.089422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.128 [2024-11-26 19:11:11.131021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.128 [2024-11-26 19:11:11.131346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:40.128 [2024-11-26 19:11:11.131527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.425 ms 00:26:40.128 [2024-11-26 19:11:11.131614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.128 [2024-11-26 19:11:11.131899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.128 [2024-11-26 19:11:11.131955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:40.128 [2024-11-26 19:11:11.131984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:40.128 [2024-11-26 19:11:11.132028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.128 [2024-11-26 19:11:11.132547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.128 [2024-11-26 19:11:11.132589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:40.128 [2024-11-26 19:11:11.132616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:26:40.128 [2024-11-26 19:11:11.132642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.128 [2024-11-26 19:11:11.132864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.128 [2024-11-26 19:11:11.132908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:40.128 [2024-11-26 19:11:11.132934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:26:40.128 [2024-11-26 19:11:11.132963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.128 [2024-11-26 19:11:11.152343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.128 [2024-11-26 19:11:11.152426] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:40.128 [2024-11-26 19:11:11.152466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.331 ms 00:26:40.128 [2024-11-26 19:11:11.152491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.128 [2024-11-26 19:11:11.176036] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:40.128 [2024-11-26 19:11:11.179048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.128 [2024-11-26 19:11:11.179101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:40.128 [2024-11-26 19:11:11.179140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.354 ms 00:26:40.128 [2024-11-26 19:11:11.179163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.128 [2024-11-26 19:11:11.239248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.128 [2024-11-26 19:11:11.239538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:40.128 [2024-11-26 19:11:11.239579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.955 ms 00:26:40.128 [2024-11-26 19:11:11.239594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.128 [2024-11-26 19:11:11.239889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.128 [2024-11-26 19:11:11.239912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:40.128 [2024-11-26 19:11:11.239932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:26:40.128 [2024-11-26 19:11:11.239945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.128 [2024-11-26 19:11:11.273384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.128 [2024-11-26 19:11:11.273475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:40.128 [2024-11-26 19:11:11.273502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.291 ms 00:26:40.128 [2024-11-26 19:11:11.273516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.129 [2024-11-26 19:11:11.306329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.129 [2024-11-26 19:11:11.306636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:40.129 [2024-11-26 19:11:11.306691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.703 ms 00:26:40.129 [2024-11-26 19:11:11.306708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.129 [2024-11-26 19:11:11.307540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.129 [2024-11-26 19:11:11.307573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:40.129 [2024-11-26 19:11:11.307596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:26:40.129 [2024-11-26 19:11:11.307608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.388 [2024-11-26 19:11:11.392664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.388 [2024-11-26 19:11:11.392759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:40.388 [2024-11-26 19:11:11.392791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.930 ms 00:26:40.388 [2024-11-26 19:11:11.392805] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.388 [2024-11-26 19:11:11.427386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.388 [2024-11-26 19:11:11.427485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:40.388 [2024-11-26 19:11:11.427511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.375 ms 00:26:40.388 [2024-11-26 19:11:11.427525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.388 [2024-11-26 19:11:11.461217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.388 [2024-11-26 19:11:11.461304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:40.388 [2024-11-26 19:11:11.461330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.545 ms 00:26:40.388 [2024-11-26 19:11:11.461343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.388 [2024-11-26 19:11:11.495205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.388 [2024-11-26 19:11:11.495295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:40.388 [2024-11-26 19:11:11.495320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.742 ms 00:26:40.388 [2024-11-26 19:11:11.495333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.388 [2024-11-26 19:11:11.495442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.388 [2024-11-26 19:11:11.495462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:40.388 [2024-11-26 19:11:11.495482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:40.388 [2024-11-26 19:11:11.495494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.388 [2024-11-26 19:11:11.495691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.388 [2024-11-26 19:11:11.495717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:40.388 [2024-11-26 19:11:11.495747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:40.388 [2024-11-26 19:11:11.495760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.388 [2024-11-26 19:11:11.496926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2471.334 ms, result 0 00:26:40.388 { 00:26:40.388 "name": "ftl0", 00:26:40.388 "uuid": "023b4b4f-bcf0-4338-8c93-3af230e4a41f" 00:26:40.388 } 00:26:40.388 19:11:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:40.388 19:11:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:40.647 19:11:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:40.647 19:11:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:40.647 19:11:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:41.213 /dev/nbd0 00:26:41.213 19:11:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:41.213 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:41.213 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:41.213 19:11:12 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:41.213 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:41.214 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:41.214 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:41.214 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:41.214 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:41.214 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:41.214 1+0 records in 00:26:41.214 1+0 records out 00:26:41.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061727 s, 6.6 MB/s 00:26:41.214 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:41.214 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:41.214 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:41.214 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:41.214 19:11:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:41.214 19:11:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:41.214 [2024-11-26 19:11:12.353119] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:26:41.214 [2024-11-26 19:11:12.353293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81345 ] 00:26:41.473 [2024-11-26 19:11:12.531002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.473 [2024-11-26 19:11:12.655234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.851  [2024-11-26T19:11:15.440Z] Copying: 154/1024 [MB] (154 MBps) [2024-11-26T19:11:16.375Z] Copying: 312/1024 [MB] (158 MBps) [2024-11-26T19:11:17.309Z] Copying: 471/1024 [MB] (158 MBps) [2024-11-26T19:11:18.243Z] Copying: 631/1024 [MB] (159 MBps) [2024-11-26T19:11:19.178Z] Copying: 788/1024 [MB] (156 MBps) [2024-11-26T19:11:19.744Z] Copying: 942/1024 [MB] (153 MBps) [2024-11-26T19:11:20.681Z] Copying: 1024/1024 [MB] (average 156 MBps) 00:26:49.466 00:26:49.466 19:11:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:51.996 19:11:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:51.996 [2024-11-26 19:11:22.914870] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
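The staging pass above pushed 1 GiB of /dev/urandom into a plain file at an average of 156 MBps; the replay that starts next writes the same 262144 4 KiB blocks into /dev/nbd0, the NBD endpoint exported for ftl0, with O_DIRECT, and the progress lines that follow show it settling around 16 MBps as each request now crosses the nbd kernel module and the FTL write path. A condensed sketch of that data path, with the checksum capture into a shell variable added for illustration; the commands themselves mirror dirty_shutdown.sh@75-78 as traced:

  #!/usr/bin/env bash
  # Stage random data, checksum it, then replay it through the FTL bdev (sketch).
  rootdir=/home/vagrant/spdk_repo/spdk
  spdk_dd=$rootdir/build/bin/spdk_dd
  testfile=$rootdir/test/ftl/testfile

  # 262144 x 4096 B = 1 GiB of random payload, written by spdk_dd on core 1 (-m 0x2)
  $spdk_dd -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144

  # Record the checksum of the staged data (the trace does this at @76),
  # so the device contents can be verified later.
  md5_before=$(md5sum "$testfile")

  # Replay into the FTL device via NBD; --oflag=direct bypasses the page cache
  $spdk_dd -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
  sync /dev/nbd0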
00:26:51.996 [2024-11-26 19:11:22.915261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81446 ] 00:26:51.996 [2024-11-26 19:11:23.132708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.255 [2024-11-26 19:11:23.271426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.626  [2024-11-26T19:11:25.775Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-26T19:11:26.709Z] Copying: 32/1024 [MB] (15 MBps) [2024-11-26T19:11:27.640Z] Copying: 49/1024 [MB] (16 MBps) [2024-11-26T19:11:28.571Z] Copying: 65/1024 [MB] (16 MBps) [2024-11-26T19:11:29.942Z] Copying: 84/1024 [MB] (18 MBps) [2024-11-26T19:11:30.874Z] Copying: 99/1024 [MB] (14 MBps) [2024-11-26T19:11:31.837Z] Copying: 114/1024 [MB] (15 MBps) [2024-11-26T19:11:32.772Z] Copying: 130/1024 [MB] (16 MBps) [2024-11-26T19:11:33.707Z] Copying: 147/1024 [MB] (16 MBps) [2024-11-26T19:11:34.641Z] Copying: 163/1024 [MB] (15 MBps) [2024-11-26T19:11:35.578Z] Copying: 180/1024 [MB] (17 MBps) [2024-11-26T19:11:36.953Z] Copying: 197/1024 [MB] (16 MBps) [2024-11-26T19:11:37.903Z] Copying: 211/1024 [MB] (14 MBps) [2024-11-26T19:11:38.837Z] Copying: 230/1024 [MB] (18 MBps) [2024-11-26T19:11:39.775Z] Copying: 247/1024 [MB] (17 MBps) [2024-11-26T19:11:40.711Z] Copying: 266/1024 [MB] (18 MBps) [2024-11-26T19:11:41.646Z] Copying: 279/1024 [MB] (13 MBps) [2024-11-26T19:11:42.581Z] Copying: 296/1024 [MB] (16 MBps) [2024-11-26T19:11:43.955Z] Copying: 313/1024 [MB] (17 MBps) [2024-11-26T19:11:44.893Z] Copying: 329/1024 [MB] (15 MBps) [2024-11-26T19:11:45.829Z] Copying: 344/1024 [MB] (15 MBps) [2024-11-26T19:11:46.762Z] Copying: 361/1024 [MB] (16 MBps) [2024-11-26T19:11:47.697Z] Copying: 377/1024 [MB] (15 MBps) [2024-11-26T19:11:48.632Z] Copying: 394/1024 [MB] (16 MBps) [2024-11-26T19:11:49.567Z] Copying: 410/1024 [MB] (15 MBps) [2024-11-26T19:11:50.943Z] Copying: 426/1024 [MB] (16 MBps) [2024-11-26T19:11:51.879Z] Copying: 443/1024 [MB] (16 MBps) [2024-11-26T19:11:52.814Z] Copying: 458/1024 [MB] (15 MBps) [2024-11-26T19:11:53.748Z] Copying: 475/1024 [MB] (17 MBps) [2024-11-26T19:11:54.682Z] Copying: 492/1024 [MB] (16 MBps) [2024-11-26T19:11:55.627Z] Copying: 510/1024 [MB] (17 MBps) [2024-11-26T19:11:56.642Z] Copying: 527/1024 [MB] (17 MBps) [2024-11-26T19:11:57.574Z] Copying: 544/1024 [MB] (16 MBps) [2024-11-26T19:11:59.071Z] Copying: 560/1024 [MB] (16 MBps) [2024-11-26T19:11:59.637Z] Copying: 578/1024 [MB] (17 MBps) [2024-11-26T19:12:00.570Z] Copying: 595/1024 [MB] (17 MBps) [2024-11-26T19:12:01.948Z] Copying: 612/1024 [MB] (17 MBps) [2024-11-26T19:12:02.883Z] Copying: 629/1024 [MB] (16 MBps) [2024-11-26T19:12:03.818Z] Copying: 644/1024 [MB] (15 MBps) [2024-11-26T19:12:04.754Z] Copying: 659/1024 [MB] (15 MBps) [2024-11-26T19:12:05.688Z] Copying: 677/1024 [MB] (18 MBps) [2024-11-26T19:12:06.621Z] Copying: 694/1024 [MB] (16 MBps) [2024-11-26T19:12:07.559Z] Copying: 712/1024 [MB] (18 MBps) [2024-11-26T19:12:08.932Z] Copying: 730/1024 [MB] (17 MBps) [2024-11-26T19:12:09.867Z] Copying: 748/1024 [MB] (17 MBps) [2024-11-26T19:12:10.871Z] Copying: 765/1024 [MB] (17 MBps) [2024-11-26T19:12:11.806Z] Copying: 782/1024 [MB] (17 MBps) [2024-11-26T19:12:12.740Z] Copying: 799/1024 [MB] (16 MBps) [2024-11-26T19:12:13.675Z] Copying: 816/1024 [MB] (17 MBps) [2024-11-26T19:12:14.608Z] Copying: 831/1024 [MB] (14 MBps) 
[2024-11-26T19:12:15.982Z] Copying: 849/1024 [MB] (18 MBps) [2024-11-26T19:12:16.917Z] Copying: 867/1024 [MB] (17 MBps) [2024-11-26T19:12:17.852Z] Copying: 884/1024 [MB] (17 MBps) [2024-11-26T19:12:18.802Z] Copying: 903/1024 [MB] (18 MBps) [2024-11-26T19:12:19.737Z] Copying: 919/1024 [MB] (16 MBps) [2024-11-26T19:12:20.672Z] Copying: 937/1024 [MB] (17 MBps) [2024-11-26T19:12:21.605Z] Copying: 955/1024 [MB] (18 MBps) [2024-11-26T19:12:22.979Z] Copying: 974/1024 [MB] (18 MBps) [2024-11-26T19:12:23.915Z] Copying: 992/1024 [MB] (18 MBps) [2024-11-26T19:12:24.515Z] Copying: 1009/1024 [MB] (16 MBps) [2024-11-26T19:12:25.890Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:27:54.675 00:27:54.675 19:12:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:54.675 19:12:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:54.675 19:12:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:54.932 [2024-11-26 19:12:26.122099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.932 [2024-11-26 19:12:26.122192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:54.932 [2024-11-26 19:12:26.122229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:54.932 [2024-11-26 19:12:26.122257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.932 [2024-11-26 19:12:26.122318] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:54.932 [2024-11-26 19:12:26.125998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.932 [2024-11-26 19:12:26.126046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:54.932 [2024-11-26 19:12:26.126077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.638 ms 00:27:54.932 [2024-11-26 19:12:26.126102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.932 [2024-11-26 19:12:26.127756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.932 [2024-11-26 19:12:26.127816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:54.932 [2024-11-26 19:12:26.127852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:27:54.932 [2024-11-26 19:12:26.127877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.932 [2024-11-26 19:12:26.143978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.932 [2024-11-26 19:12:26.144068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:54.932 [2024-11-26 19:12:26.144107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.031 ms 00:27:54.932 [2024-11-26 19:12:26.144128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.192 [2024-11-26 19:12:26.151026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.192 [2024-11-26 19:12:26.151103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:55.192 [2024-11-26 19:12:26.151137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.764 ms 00:27:55.192 [2024-11-26 19:12:26.151158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.192 [2024-11-26 19:12:26.183842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:55.192 [2024-11-26 19:12:26.183928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:55.192 [2024-11-26 19:12:26.183965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.430 ms 00:27:55.192 [2024-11-26 19:12:26.183986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.192 [2024-11-26 19:12:26.203352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.192 [2024-11-26 19:12:26.203444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:55.192 [2024-11-26 19:12:26.203487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.211 ms 00:27:55.192 [2024-11-26 19:12:26.203509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.192 [2024-11-26 19:12:26.203973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.192 [2024-11-26 19:12:26.204021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:55.192 [2024-11-26 19:12:26.204059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:27:55.192 [2024-11-26 19:12:26.204082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.192 [2024-11-26 19:12:26.237036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.192 [2024-11-26 19:12:26.237125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:55.192 [2024-11-26 19:12:26.237163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.884 ms 00:27:55.192 [2024-11-26 19:12:26.237199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.192 [2024-11-26 19:12:26.270751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.192 [2024-11-26 19:12:26.270867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:55.192 [2024-11-26 19:12:26.270909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.407 ms 00:27:55.192 [2024-11-26 19:12:26.270936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.192 [2024-11-26 19:12:26.304125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.192 [2024-11-26 19:12:26.304239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:55.193 [2024-11-26 19:12:26.304279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.007 ms 00:27:55.193 [2024-11-26 19:12:26.304300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.193 [2024-11-26 19:12:26.336850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.193 [2024-11-26 19:12:26.336939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:55.193 [2024-11-26 19:12:26.336976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.209 ms 00:27:55.193 [2024-11-26 19:12:26.336996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.193 [2024-11-26 19:12:26.337151] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:55.193 [2024-11-26 19:12:26.337208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337267] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337900] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.337993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 
[2024-11-26 19:12:26.338587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.338997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:27:55.193 [2024-11-26 19:12:26.339224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:55.193 [2024-11-26 19:12:26.339367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:55.194 [2024-11-26 19:12:26.339818] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:55.194 [2024-11-26 19:12:26.339845] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 023b4b4f-bcf0-4338-8c93-3af230e4a41f 
00:27:55.194 [2024-11-26 19:12:26.339866] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:55.194 [2024-11-26 19:12:26.339887] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:55.194 [2024-11-26 19:12:26.339908] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:55.194 [2024-11-26 19:12:26.339922] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:55.194 [2024-11-26 19:12:26.339934] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:55.194 [2024-11-26 19:12:26.339949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:55.194 [2024-11-26 19:12:26.339961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:55.194 [2024-11-26 19:12:26.339975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:55.194 [2024-11-26 19:12:26.339986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:55.194 [2024-11-26 19:12:26.340001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.194 [2024-11-26 19:12:26.340013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:55.194 [2024-11-26 19:12:26.340029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.876 ms 00:27:55.194 [2024-11-26 19:12:26.340041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.194 [2024-11-26 19:12:26.357235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.194 [2024-11-26 19:12:26.357308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:55.194 [2024-11-26 19:12:26.357331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.083 ms 00:27:55.194 [2024-11-26 19:12:26.357344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.194 [2024-11-26 19:12:26.357819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.194 [2024-11-26 19:12:26.357848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:55.194 [2024-11-26 19:12:26.357865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:27:55.194 [2024-11-26 19:12:26.357878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.452 [2024-11-26 19:12:26.413842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.452 [2024-11-26 19:12:26.413924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:55.452 [2024-11-26 19:12:26.413948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.452 [2024-11-26 19:12:26.413961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.452 [2024-11-26 19:12:26.414058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.452 [2024-11-26 19:12:26.414074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:55.452 [2024-11-26 19:12:26.414089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.452 [2024-11-26 19:12:26.414101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.452 [2024-11-26 19:12:26.414289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.452 [2024-11-26 19:12:26.414314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:55.452 [2024-11-26 19:12:26.414330] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.452 [2024-11-26 19:12:26.414342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.452 [2024-11-26 19:12:26.414376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.452 [2024-11-26 19:12:26.414390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:55.452 [2024-11-26 19:12:26.414404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.452 [2024-11-26 19:12:26.414416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.452 [2024-11-26 19:12:26.520665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.452 [2024-11-26 19:12:26.520761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:55.452 [2024-11-26 19:12:26.520789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.452 [2024-11-26 19:12:26.520803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.452 [2024-11-26 19:12:26.608086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.452 [2024-11-26 19:12:26.608161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:55.452 [2024-11-26 19:12:26.608214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.452 [2024-11-26 19:12:26.608228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.452 [2024-11-26 19:12:26.608376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.452 [2024-11-26 19:12:26.608396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:55.452 [2024-11-26 19:12:26.608415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.452 [2024-11-26 19:12:26.608427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.452 [2024-11-26 19:12:26.608504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.452 [2024-11-26 19:12:26.608522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:55.452 [2024-11-26 19:12:26.608537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.452 [2024-11-26 19:12:26.608566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.452 [2024-11-26 19:12:26.608702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.452 [2024-11-26 19:12:26.608722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:55.452 [2024-11-26 19:12:26.608737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.452 [2024-11-26 19:12:26.608752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.452 [2024-11-26 19:12:26.608813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.453 [2024-11-26 19:12:26.608832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:55.453 [2024-11-26 19:12:26.608847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.453 [2024-11-26 19:12:26.608859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.453 [2024-11-26 19:12:26.608910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.453 [2024-11-26 19:12:26.608925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:27:55.453 [2024-11-26 19:12:26.608939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.453 [2024-11-26 19:12:26.608954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.453 [2024-11-26 19:12:26.609015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.453 [2024-11-26 19:12:26.609032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:55.453 [2024-11-26 19:12:26.609047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.453 [2024-11-26 19:12:26.609058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.453 [2024-11-26 19:12:26.609240] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 487.090 ms, result 0 00:27:55.453 true 00:27:55.453 19:12:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81197 00:27:55.453 19:12:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81197 00:27:55.453 19:12:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:55.711 [2024-11-26 19:12:26.728363] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:27:55.711 [2024-11-26 19:12:26.728521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82071 ] 00:27:55.711 [2024-11-26 19:12:26.910144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.969 [2024-11-26 19:12:27.015053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.343  [2024-11-26T19:12:29.492Z] Copying: 161/1024 [MB] (161 MBps) [2024-11-26T19:12:30.426Z] Copying: 324/1024 [MB] (162 MBps) [2024-11-26T19:12:31.358Z] Copying: 483/1024 [MB] (158 MBps) [2024-11-26T19:12:32.731Z] Copying: 633/1024 [MB] (150 MBps) [2024-11-26T19:12:33.665Z] Copying: 786/1024 [MB] (153 MBps) [2024-11-26T19:12:33.923Z] Copying: 937/1024 [MB] (150 MBps) [2024-11-26T19:12:35.297Z] Copying: 1024/1024 [MB] (average 156 MBps) 00:28:04.082 00:28:04.082 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81197 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:04.083 19:12:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:04.083 [2024-11-26 19:12:35.061958] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
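The kill -9 above is the point of the test: the spdk_tgt that owns ftl0 is removed without ever running the FTL shutdown path, leaving the device dirty on purpose, and dirty_shutdown.sh@88 then writes through spdk_dd using the saved JSON bdev configuration. A sketch of that sequence, with the file paths shortened to placeholders (the real pid file carries the pid as a suffix, as shown above):

    #!/usr/bin/env bash
    # Dirty-shutdown step, as traced above (dirty_shutdown.sh@83-88).
    svc_pid=81197                      # in the harness this comes from spdk_tgt startup
    kill -9 "$svc_pid"                 # no clean FTL shutdown is allowed to run
    rm -f "/dev/shm/spdk_tgt_trace.pid$svc_pid"

    # Write the second testfile straight into ftl0 via the saved bdev config,
    # seeking past the 262144-block region populated earlier in the test.
    "$SPDK_BIN_DIR/spdk_dd" --if=testfile2 --ob=ftl0 \
        --count=262144 --seek=262144 --json=ftl.json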
00:28:04.083 [2024-11-26 19:12:35.062159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82152 ] 00:28:04.083 [2024-11-26 19:12:35.264408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.341 [2024-11-26 19:12:35.390407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.599 [2024-11-26 19:12:35.727929] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:04.599 [2024-11-26 19:12:35.728018] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:04.599 [2024-11-26 19:12:35.795732] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:04.599 [2024-11-26 19:12:35.796133] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:04.599 [2024-11-26 19:12:35.796380] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:04.858 [2024-11-26 19:12:36.050075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.858 [2024-11-26 19:12:36.050158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:04.858 [2024-11-26 19:12:36.050200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:04.858 [2024-11-26 19:12:36.050218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.858 [2024-11-26 19:12:36.050300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.858 [2024-11-26 19:12:36.050319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:04.858 [2024-11-26 19:12:36.050331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:04.858 [2024-11-26 19:12:36.050343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.858 [2024-11-26 19:12:36.050374] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:04.858 [2024-11-26 19:12:36.051361] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:04.858 [2024-11-26 19:12:36.051405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.858 [2024-11-26 19:12:36.051418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:04.858 [2024-11-26 19:12:36.051431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:28:04.858 [2024-11-26 19:12:36.051443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.858 [2024-11-26 19:12:36.052711] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:04.858 [2024-11-26 19:12:36.069336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.858 [2024-11-26 19:12:36.069423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:04.858 [2024-11-26 19:12:36.069443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.622 ms 00:28:04.858 [2024-11-26 19:12:36.069457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.858 [2024-11-26 19:12:36.069597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.858 [2024-11-26 19:12:36.069618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:28:04.858 [2024-11-26 19:12:36.069632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:04.858 [2024-11-26 19:12:36.069643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.117 [2024-11-26 19:12:36.074447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.117 [2024-11-26 19:12:36.074511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:05.117 [2024-11-26 19:12:36.074530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.667 ms 00:28:05.117 [2024-11-26 19:12:36.074542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.117 [2024-11-26 19:12:36.074664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.117 [2024-11-26 19:12:36.074685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:05.117 [2024-11-26 19:12:36.074698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:28:05.117 [2024-11-26 19:12:36.074709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.117 [2024-11-26 19:12:36.074795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.117 [2024-11-26 19:12:36.074814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:05.117 [2024-11-26 19:12:36.074827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:05.117 [2024-11-26 19:12:36.074839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.117 [2024-11-26 19:12:36.074875] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:05.117 [2024-11-26 19:12:36.079194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.117 [2024-11-26 19:12:36.079247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:05.117 [2024-11-26 19:12:36.079263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.329 ms 00:28:05.117 [2024-11-26 19:12:36.079274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.117 [2024-11-26 19:12:36.079321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.117 [2024-11-26 19:12:36.079335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:05.117 [2024-11-26 19:12:36.079349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:05.117 [2024-11-26 19:12:36.079360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.117 [2024-11-26 19:12:36.079427] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:05.117 [2024-11-26 19:12:36.079469] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:05.117 [2024-11-26 19:12:36.079513] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:05.117 [2024-11-26 19:12:36.079535] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:05.117 [2024-11-26 19:12:36.079663] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:05.117 [2024-11-26 19:12:36.079691] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:05.117 
[2024-11-26 19:12:36.079707] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:05.117 [2024-11-26 19:12:36.079729] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:05.117 [2024-11-26 19:12:36.079743] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:05.117 [2024-11-26 19:12:36.079755] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:05.117 [2024-11-26 19:12:36.079766] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:05.117 [2024-11-26 19:12:36.079777] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:05.117 [2024-11-26 19:12:36.079788] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:05.117 [2024-11-26 19:12:36.079812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.117 [2024-11-26 19:12:36.079824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:05.117 [2024-11-26 19:12:36.079836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:28:05.117 [2024-11-26 19:12:36.079847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.117 [2024-11-26 19:12:36.079948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.117 [2024-11-26 19:12:36.079969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:05.117 [2024-11-26 19:12:36.079981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:05.117 [2024-11-26 19:12:36.079993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.117 [2024-11-26 19:12:36.080143] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:05.117 [2024-11-26 19:12:36.080192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:05.117 [2024-11-26 19:12:36.080208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:05.117 [2024-11-26 19:12:36.080220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.117 [2024-11-26 19:12:36.080232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:05.117 [2024-11-26 19:12:36.080243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:05.117 [2024-11-26 19:12:36.080253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:05.117 [2024-11-26 19:12:36.080265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:05.117 [2024-11-26 19:12:36.080275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:05.117 [2024-11-26 19:12:36.080301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:05.117 [2024-11-26 19:12:36.080312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:05.117 [2024-11-26 19:12:36.080322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:05.117 [2024-11-26 19:12:36.080332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:05.117 [2024-11-26 19:12:36.080343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:05.117 [2024-11-26 19:12:36.080354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:05.117 [2024-11-26 19:12:36.080366] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.117 [2024-11-26 19:12:36.080377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:05.117 [2024-11-26 19:12:36.080388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:05.117 [2024-11-26 19:12:36.080398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.117 [2024-11-26 19:12:36.080409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:05.117 [2024-11-26 19:12:36.080419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:05.117 [2024-11-26 19:12:36.080429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.117 [2024-11-26 19:12:36.080440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:05.117 [2024-11-26 19:12:36.080450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:05.117 [2024-11-26 19:12:36.080460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.117 [2024-11-26 19:12:36.080473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:05.117 [2024-11-26 19:12:36.080484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:05.117 [2024-11-26 19:12:36.080494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.118 [2024-11-26 19:12:36.080505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:05.118 [2024-11-26 19:12:36.080515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:05.118 [2024-11-26 19:12:36.080525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.118 [2024-11-26 19:12:36.080536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:05.118 [2024-11-26 19:12:36.080546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:05.118 [2024-11-26 19:12:36.080556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:05.118 [2024-11-26 19:12:36.080567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:05.118 [2024-11-26 19:12:36.080577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:05.118 [2024-11-26 19:12:36.080587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:05.118 [2024-11-26 19:12:36.080598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:05.118 [2024-11-26 19:12:36.080608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:05.118 [2024-11-26 19:12:36.080619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.118 [2024-11-26 19:12:36.080629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:05.118 [2024-11-26 19:12:36.080640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:05.118 [2024-11-26 19:12:36.080651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.118 [2024-11-26 19:12:36.080661] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:05.118 [2024-11-26 19:12:36.080672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:05.118 [2024-11-26 19:12:36.080689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:05.118 [2024-11-26 19:12:36.080700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.118 [2024-11-26 
19:12:36.080713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:05.118 [2024-11-26 19:12:36.080723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:05.118 [2024-11-26 19:12:36.080734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:05.118 [2024-11-26 19:12:36.080745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:05.118 [2024-11-26 19:12:36.080755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:05.118 [2024-11-26 19:12:36.080766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:05.118 [2024-11-26 19:12:36.080779] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:05.118 [2024-11-26 19:12:36.080793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.118 [2024-11-26 19:12:36.080807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:05.118 [2024-11-26 19:12:36.080818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:05.118 [2024-11-26 19:12:36.080829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:05.118 [2024-11-26 19:12:36.080840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:05.118 [2024-11-26 19:12:36.080851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:05.118 [2024-11-26 19:12:36.080863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:05.118 [2024-11-26 19:12:36.080874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:05.118 [2024-11-26 19:12:36.080886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:05.118 [2024-11-26 19:12:36.080897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:05.118 [2024-11-26 19:12:36.080908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:05.118 [2024-11-26 19:12:36.080920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:05.118 [2024-11-26 19:12:36.080931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:05.118 [2024-11-26 19:12:36.080943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:05.118 [2024-11-26 19:12:36.080954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:05.118 [2024-11-26 19:12:36.080966] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:28:05.118 [2024-11-26 19:12:36.080978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.118 [2024-11-26 19:12:36.080991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:05.118 [2024-11-26 19:12:36.081002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:05.118 [2024-11-26 19:12:36.081014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:05.118 [2024-11-26 19:12:36.081026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:05.118 [2024-11-26 19:12:36.081038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.081049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:05.118 [2024-11-26 19:12:36.081061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:28:05.118 [2024-11-26 19:12:36.081073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.114138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.114215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:05.118 [2024-11-26 19:12:36.114237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.996 ms 00:28:05.118 [2024-11-26 19:12:36.114250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.114384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.114409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:05.118 [2024-11-26 19:12:36.114423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:05.118 [2024-11-26 19:12:36.114434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.175010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.175099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:05.118 [2024-11-26 19:12:36.175127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.465 ms 00:28:05.118 [2024-11-26 19:12:36.175139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.175246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.175265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:05.118 [2024-11-26 19:12:36.175278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:05.118 [2024-11-26 19:12:36.175290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.175757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.175805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:05.118 [2024-11-26 19:12:36.175822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:28:05.118 [2024-11-26 19:12:36.175842] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.176012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.176042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:05.118 [2024-11-26 19:12:36.176056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:28:05.118 [2024-11-26 19:12:36.176068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.193574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.193654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:05.118 [2024-11-26 19:12:36.193676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.473 ms 00:28:05.118 [2024-11-26 19:12:36.193688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.210962] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:05.118 [2024-11-26 19:12:36.211052] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:05.118 [2024-11-26 19:12:36.211077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.211090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:05.118 [2024-11-26 19:12:36.211107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.176 ms 00:28:05.118 [2024-11-26 19:12:36.211118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.245159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.245261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:05.118 [2024-11-26 19:12:36.245282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.880 ms 00:28:05.118 [2024-11-26 19:12:36.245296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.262251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.262347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:05.118 [2024-11-26 19:12:36.262369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.821 ms 00:28:05.118 [2024-11-26 19:12:36.262381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.279151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.279238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:05.118 [2024-11-26 19:12:36.279258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.675 ms 00:28:05.118 [2024-11-26 19:12:36.279272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.118 [2024-11-26 19:12:36.280315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.118 [2024-11-26 19:12:36.280355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:05.118 [2024-11-26 19:12:36.280373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:28:05.118 [2024-11-26 19:12:36.280385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
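Everything from "Check configuration" down to "Initialize P2L checkpointing" here is the FTL startup sequence replaying against the dirty superblock, with per-step costs in the duration fields (note "Restore P2L checkpoints" at 76.856 ms just below, the most expensive restore step). Not part of the harness, but a small reader's helper for these records, assuming the raw log sits in a file named build.log with one trace_step record per line:

    #!/usr/bin/env bash
    # Pair each trace_step "name:" with the "duration:" that follows it,
    # then rank the steps by cost to see what dominates FTL startup.
    awk -F': ' '
        /trace_step.*name: /     { name = $NF }                      # remember the step
        /trace_step.*duration: / { printf "%12s  %s\n", $NF, name }  # cost + step
    ' build.log | sort -rn | head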
00:28:05.379 [2024-11-26 19:12:36.357276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:05.379 [2024-11-26 19:12:36.357374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:28:05.379 [2024-11-26 19:12:36.357396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.856 ms
00:28:05.379 [2024-11-26 19:12:36.357409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:05.379 [2024-11-26 19:12:36.370691] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:28:05.379 [2024-11-26 19:12:36.373586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:05.379 [2024-11-26 19:12:36.373643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:28:05.379 [2024-11-26 19:12:36.373670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.069 ms
00:28:05.379 [2024-11-26 19:12:36.373682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:05.379 [2024-11-26 19:12:36.373828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:05.379 [2024-11-26 19:12:36.373848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:28:05.379 [2024-11-26 19:12:36.373862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:28:05.379 [2024-11-26 19:12:36.373873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:05.379 [2024-11-26 19:12:36.373971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:05.379 [2024-11-26 19:12:36.373990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:28:05.379 [2024-11-26 19:12:36.374003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:28:05.379 [2024-11-26 19:12:36.374020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:05.379 [2024-11-26 19:12:36.374053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:05.379 [2024-11-26 19:12:36.374068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:28:05.379 [2024-11-26 19:12:36.374079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:28:05.379 [2024-11-26 19:12:36.374090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:05.379 [2024-11-26 19:12:36.374132] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:05.379 [2024-11-26 19:12:36.374164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:05.379 [2024-11-26 19:12:36.374201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:28:05.379 [2024-11-26 19:12:36.374220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:28:05.379 [2024-11-26 19:12:36.374231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:05.379 [2024-11-26 19:12:36.407091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:05.379 [2024-11-26 19:12:36.407189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:28:05.379 [2024-11-26 19:12:36.407212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.823 ms
00:28:05.379 [2024-11-26 19:12:36.407226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:05.379 [2024-11-26 19:12:36.407395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:05.379 [2024-11-26 19:12:36.407416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:05.379 [2024-11-26 19:12:36.407429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:28:05.379 [2024-11-26 19:12:36.407445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:05.379 [2024-11-26 19:12:36.408755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 358.093 ms, result 0
00:28:06.311  [2024-11-26T19:12:38.475Z] Copying: 27/1024 [MB] (27 MBps) [... intermediate dd progress updates condensed ...] [2024-11-26T19:13:12.719Z] Copying: 1048496/1048576 [kB] (760 kBps) [2024-11-26T19:13:12.719Z] Copying: 1024/1024 [MB] (average 28 MBps)
[2024-11-26 19:13:12.549097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.504 [2024-11-26 19:13:12.549205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:41.504 [2024-11-26 19:13:12.549253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:28:41.504 [2024-11-26 19:13:12.549285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.504 [2024-11-26 19:13:12.550559] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:41.504 [2024-11-26 19:13:12.556407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.504 [2024-11-26 19:13:12.556482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:41.504 [2024-11-26 19:13:12.556532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.762 ms
00:28:41.504 [2024-11-26 19:13:12.556552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.504 [2024-11-26 19:13:12.570345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.504 [2024-11-26 19:13:12.570442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:41.504 [2024-11-26 19:13:12.570472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.995 ms
00:28:41.504 [2024-11-26 19:13:12.570491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.504 [2024-11-26 19:13:12.591773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.504 [2024-11-26 19:13:12.591898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:41.504 [2024-11-26 19:13:12.591937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.201 ms
00:28:41.504 [2024-11-26 19:13:12.591957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.504 [2024-11-26 19:13:12.598856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.504 [2024-11-26 19:13:12.598926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:41.504 [2024-11-26 19:13:12.598958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.743 ms
00:28:41.504 [2024-11-26 19:13:12.598980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.504 [2024-11-26 19:13:12.631472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.504 [2024-11-26 19:13:12.631565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:28:41.504 [2024-11-26 19:13:12.631597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.381 ms
00:28:41.504 [2024-11-26 19:13:12.631616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.504 [2024-11-26 19:13:12.649966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.504 [2024-11-26 19:13:12.650084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:28:41.504 [2024-11-26 19:13:12.650116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.167 ms
00:28:41.504 [2024-11-26 19:13:12.650139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.764 [2024-11-26 19:13:12.725411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.764 [2024-11-26 19:13:12.725538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:28:41.764 [2024-11-26 19:13:12.725578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.071 ms
00:28:41.764 [2024-11-26 19:13:12.725599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.764 [2024-11-26 19:13:12.758737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.764 [2024-11-26 19:13:12.758843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:28:41.764 [2024-11-26 19:13:12.758878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.071 ms
00:28:41.764 [2024-11-26 19:13:12.758928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.764 [2024-11-26 19:13:12.791325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.764 [2024-11-26 19:13:12.791421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:28:41.764 [2024-11-26 19:13:12.791454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.244 ms
00:28:41.764 [2024-11-26 19:13:12.791472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.764 [2024-11-26 19:13:12.823751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.764 [2024-11-26 19:13:12.823854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:41.764 [2024-11-26 19:13:12.823888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.166 ms
00:28:41.764 [2024-11-26 19:13:12.823907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.764 [2024-11-26 19:13:12.856241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.764 [2024-11-26 19:13:12.856355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:41.764 [2024-11-26 19:13:12.856386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.117 ms
00:28:41.764 [2024-11-26 19:13:12.856405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.764 [2024-11-26 19:13:12.856516] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:41.764 [2024-11-26 19:13:12.856555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130048 / 261120 wr_cnt: 1 state: open
00:28:41.764 [2024-11-26 19:13:12.856600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
[... ftl_dev_dump_bands records for Bands 3-100, all identical to Band 2 (0 / 261120 wr_cnt: 0 state: free), condensed ...]
00:28:41.765 [2024-11-26 19:13:12.858736] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:41.765 [2024-11-26 19:13:12.858789] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 023b4b4f-bcf0-4338-8c93-3af230e4a41f
00:28:41.765 [2024-11-26 19:13:12.858844] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130048
00:28:41.765 [2024-11-26 19:13:12.858860] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131008
00:28:41.765 [2024-11-26 19:13:12.858876] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130048
00:28:41.765 [2024-11-26 19:13:12.858898] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074
00:28:41.765 [2024-11-26 19:13:12.858918] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:41.765 [2024-11-26 19:13:12.858938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:41.765 [2024-11-26 19:13:12.858960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:41.765 [2024-11-26 19:13:12.858979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:41.765 [2024-11-26 19:13:12.858999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:28:41.765 [2024-11-26 19:13:12.859031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.765 [2024-11-26 19:13:12.859053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:41.765 [2024-11-26 19:13:12.859076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.517 ms
00:28:41.765 [2024-11-26 19:13:12.859097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.765 [2024-11-26 19:13:12.876241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.765 [2024-11-26 19:13:12.876326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:28:41.765 [2024-11-26 19:13:12.876357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.973 ms
00:28:41.765 [2024-11-26 19:13:12.876376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.765 [2024-11-26 19:13:12.876952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:41.765 [2024-11-26 19:13:12.876999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:28:41.765 [2024-11-26 19:13:12.877035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms
00:28:41.765 [2024-11-26 19:13:12.877057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:41.765 [2024-11-26 19:13:12.920892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:41.765 [2024-11-26 19:13:12.920980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:41.765 [2024-11-26 19:13:12.921010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:41.765 [2024-11-26 19:13:12.921028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[... matching Rollback record groups (each duration: 0.000 ms, status: 0) condensed, in order: Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev ...]
00:28:42.024 [2024-11-26 19:13:13.114637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 568.762 ms, result 0
00:28:43.400
00:28:43.400
00:28:43.400 19:13:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:28:45.930 19:13:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:28:45.930 [2024-11-26 19:13:16.871251] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
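The two dirty_shutdown.sh commands above carry the actual verification: @90 checksums a reference file with md5sum, and @93 uses spdk_dd to read the test data back out of the ftl0 bdev into testfile (the --ib/--of/--count/--json flags are exactly as printed). The pass criterion is simply that the read-back data and the data written before the dirty shutdown hash the same; which files are paired is decided by the script, not shown in this excerpt. A minimal standalone sketch of that final comparison step (hypothetical script, equivalent to diffing two md5sum outputs):

#!/usr/bin/env python3
"""verify_readback.py (hypothetical): compare a read-back file with its reference."""
import hashlib
import sys

def md5_of(path, chunk=1 << 20):
    # Stream in 1 MiB chunks so a 1 GiB testfile is never held in RAM at once.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

if __name__ == "__main__":
    read_back, reference = sys.argv[1], sys.argv[2]   # pairing assumed, per the test script
    a, b = md5_of(read_back), md5_of(reference)
    print(a, read_back)
    print(b, reference)
    sys.exit(0 if a == b else 1)                      # non-zero exit fails the test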
00:28:45.930 [2024-11-26 19:13:16.871413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82554 ]
00:28:45.930 [2024-11-26 19:13:17.096358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:46.188 [2024-11-26 19:13:17.261108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:46.447 [2024-11-26 19:13:17.627011] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:46.447 [2024-11-26 19:13:17.627092] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:46.706 [2024-11-26 19:13:17.789850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.706 [2024-11-26 19:13:17.789937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:28:46.706 [2024-11-26 19:13:17.789958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:28:46.706 [2024-11-26 19:13:17.789970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.706 [2024-11-26 19:13:17.790059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.706 [2024-11-26 19:13:17.790080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:46.706 [2024-11-26 19:13:17.790092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:28:46.706 [2024-11-26 19:13:17.790103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.706 [2024-11-26 19:13:17.790148] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:46.706 [2024-11-26 19:13:17.791253] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:46.706 [2024-11-26 19:13:17.791297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.706 [2024-11-26 19:13:17.791312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:46.706 [2024-11-26 19:13:17.791325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms
00:28:46.706 [2024-11-26 19:13:17.791336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.706 [2024-11-26 19:13:17.792701] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:46.706 [2024-11-26 19:13:17.809871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.706 [2024-11-26 19:13:17.809955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:28:46.706 [2024-11-26 19:13:17.809976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.165 ms
00:28:46.706 [2024-11-26 19:13:17.809987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.706 [2024-11-26 19:13:17.810123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.706 [2024-11-26 19:13:17.810143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:28:46.706 [2024-11-26 19:13:17.810156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:28:46.706 [2024-11-26 19:13:17.810168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.707 [2024-11-26 19:13:17.815062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.707 [2024-11-26 19:13:17.815131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:46.707 [2024-11-26 19:13:17.815149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.723 ms
00:28:46.707 [2024-11-26 19:13:17.815181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.707 [2024-11-26 19:13:17.815301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.707 [2024-11-26 19:13:17.815323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:46.707 [2024-11-26 19:13:17.815335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms
00:28:46.707 [2024-11-26 19:13:17.815345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.707 [2024-11-26 19:13:17.815424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.707 [2024-11-26 19:13:17.815441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:28:46.707 [2024-11-26 19:13:17.815453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:28:46.707 [2024-11-26 19:13:17.815464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.707 [2024-11-26 19:13:17.815505] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:46.707 [2024-11-26 19:13:17.819791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.707 [2024-11-26 19:13:17.819860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:28:46.707 [2024-11-26 19:13:17.819882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.296 ms
00:28:46.707 [2024-11-26 19:13:17.819893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.707 [2024-11-26 19:13:17.819941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.707 [2024-11-26 19:13:17.819956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:28:46.707 [2024-11-26 19:13:17.819969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:28:46.707 [2024-11-26 19:13:17.819979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.707 [2024-11-26 19:13:17.820036] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:28:46.707 [2024-11-26 19:13:17.820068] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:28:46.707 [2024-11-26 19:13:17.820123] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:28:46.707 [2024-11-26 19:13:17.820146] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:28:46.707 [2024-11-26 19:13:17.820277] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:28:46.707 [2024-11-26 19:13:17.820305] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:28:46.707 [2024-11-26 19:13:17.820320] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:28:46.707 [2024-11-26 19:13:17.820335] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:28:46.707 [2024-11-26 19:13:17.820348] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:28:46.707 [2024-11-26 19:13:17.820360] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:28:46.707 [2024-11-26 19:13:17.820371] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:28:46.707 [2024-11-26 19:13:17.820386] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:28:46.707 [2024-11-26 19:13:17.820396] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:28:46.707 [2024-11-26 19:13:17.820408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.707 [2024-11-26 19:13:17.820418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:28:46.707 [2024-11-26 19:13:17.820430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms
00:28:46.707 [2024-11-26 19:13:17.820440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.707 [2024-11-26 19:13:17.820544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.707 [2024-11-26 19:13:17.820568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:28:46.707 [2024-11-26 19:13:17.820580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms
00:28:46.707 [2024-11-26 19:13:17.820591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.707 [2024-11-26 19:13:17.820750] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (dump_region records tabulated; offset / blocks in MiB):
  sb                    0.00        0.12
  l2p                   0.12       80.00
  band_md              80.12        0.50
  band_md_mirror       80.62        0.50
  nvc_md              113.88        0.12
  nvc_md_mirror       114.00        0.12
  p2l0                 81.12        8.00
  p2l1                 89.12        8.00
  p2l2                 97.12        8.00
  p2l3                105.12        8.00
  trim_md             113.12        0.25
  trim_md_mirror      113.38        0.25
  trim_log            113.62        0.12
  trim_log_mirror     113.75        0.12
00:28:46.708 [2024-11-26 19:13:17.821244] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (offset / blocks in MiB):
  sb_mirror             0.00        0.12
  vmap             102400.25        3.38
  data_btm              0.25   102400.00
00:28:46.708 [2024-11-26 19:13:17.821352] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc (Region records tabulated):
  type:0x0         ver:5  blk_offs:0x0        blk_sz:0x20
  type:0x2         ver:0  blk_offs:0x20       blk_sz:0x5000
  type:0x3         ver:2  blk_offs:0x5020     blk_sz:0x80
  type:0x4         ver:2  blk_offs:0x50a0     blk_sz:0x80
  type:0xa         ver:2  blk_offs:0x5120     blk_sz:0x800
  type:0xb         ver:2  blk_offs:0x5920     blk_sz:0x800
  type:0xc         ver:2  blk_offs:0x6120     blk_sz:0x800
  type:0xd         ver:2  blk_offs:0x6920     blk_sz:0x800
  type:0xe         ver:0  blk_offs:0x7120     blk_sz:0x40
  type:0xf         ver:0  blk_offs:0x7160     blk_sz:0x40
  type:0x10        ver:1  blk_offs:0x71a0     blk_sz:0x20
  type:0x11        ver:1  blk_offs:0x71c0     blk_sz:0x20
  type:0x6         ver:2  blk_offs:0x71e0     blk_sz:0x20
  type:0x7         ver:2  blk_offs:0x7200     blk_sz:0x20
  type:0xfffffffe  ver:0  blk_offs:0x7220     blk_sz:0x13c0e0
00:28:46.708 [2024-11-26 19:13:17.821535] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  type:0x1         ver:5  blk_offs:0x0        blk_sz:0x20
  type:0xfffffffe  ver:0  blk_offs:0x20       blk_sz:0x20
  type:0x9         ver:0  blk_offs:0x40       blk_sz:0x1900000
  type:0x5         ver:0  blk_offs:0x1900040  blk_sz:0x360
  type:0xfffffffe  ver:0  blk_offs:0x19003a0  blk_sz:0x3fc60
00:28:46.708 [2024-11-26 19:13:17.821604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.708 [2024-11-26 19:13:17.821615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:28:46.708 [2024-11-26 19:13:17.821627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms
00:28:46.708 [2024-11-26 19:13:17.821638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.708 [2024-11-26 19:13:17.854835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.708 [2024-11-26 19:13:17.854905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:28:46.708 [2024-11-26 19:13:17.854924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.133 ms
00:28:46.708 [2024-11-26 19:13:17.854941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.708 [2024-11-26 19:13:17.855058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.708 [2024-11-26 19:13:17.855074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:28:46.708 [2024-11-26 19:13:17.855086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms
00:28:46.708 [2024-11-26 19:13:17.855098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.708 [2024-11-26 19:13:17.905785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.708 [2024-11-26 19:13:17.905859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:28:46.708 [2024-11-26 19:13:17.905880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.537 ms
00:28:46.708 [2024-11-26 19:13:17.905891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.708 [2024-11-26 19:13:17.905974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.708 [2024-11-26 19:13:17.905991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:28:46.708 [2024-11-26 19:13:17.906011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:28:46.708 [2024-11-26 19:13:17.906023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.708 [2024-11-26 19:13:17.906502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.708 [2024-11-26 19:13:17.906533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:28:46.708 [2024-11-26 19:13:17.906548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms
00:28:46.708 [2024-11-26 19:13:17.906559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.708 [2024-11-26 19:13:17.906739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.708 [2024-11-26 19:13:17.906773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:28:46.708 [2024-11-26 19:13:17.906801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms
00:28:46.708 [2024-11-26 19:13:17.906819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.967 [2024-11-26 19:13:17.925230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:17.925314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:46.967 [2024-11-26 19:13:17.925335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.365 ms
00:28:46.967 [2024-11-26 19:13:17.925347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.967 [2024-11-26 19:13:17.942183] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:28:46.967 [2024-11-26 19:13:17.942266] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:28:46.967 [2024-11-26 19:13:17.942289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:17.942302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:28:46.967 [2024-11-26 19:13:17.942317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.714 ms
00:28:46.967 [2024-11-26 19:13:17.942329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.967 [2024-11-26 19:13:17.973123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:17.973221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:28:46.967 [2024-11-26 19:13:17.973243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.701 ms
00:28:46.967 [2024-11-26 19:13:17.973255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.967 [2024-11-26 19:13:17.989974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:17.990038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:28:46.967 [2024-11-26 19:13:17.990057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.633 ms
00:28:46.967 [2024-11-26 19:13:17.990071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.967 [2024-11-26 19:13:18.006344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:18.006424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:28:46.967 [2024-11-26 19:13:18.006443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.187 ms
00:28:46.967 [2024-11-26 19:13:18.006455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.967 [2024-11-26 19:13:18.007373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:18.007410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:28:46.967 [2024-11-26 19:13:18.007430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms
00:28:46.967 [2024-11-26 19:13:18.007441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.967 [2024-11-26 19:13:18.082893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:18.082984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:28:46.967 [2024-11-26 19:13:18.083019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.420 ms
00:28:46.967 [2024-11-26 19:13:18.083031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.967 [2024-11-26 19:13:18.096131] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:28:46.967 [2024-11-26 19:13:18.098866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:18.098916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:28:46.967 [2024-11-26 19:13:18.098936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.747 ms
00:28:46.967 [2024-11-26 19:13:18.098948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
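The L2P figures printed during this startup are internally consistent and worth a quick check: 20971520 entries at 4 bytes per address is exactly the 80.00 MiB 'Region l2p' in the layout dump above, and the type:0x2 superblock region (blk_offs:0x20 blk_sz:0x5000) works out to the same size and to the 0.12 MiB offset if one FTL block is 4 KiB. That block size is an assumption for this sketch, it is not printed in the log:

# l2p_sizing.py (hypothetical sanity check; all other values are from the log above)
FTL_BLOCK_SIZE = 4096                    # assumed 4 KiB FTL block, not printed here
MiB = 1 << 20

l2p_entries = 20_971_520                 # "L2P entries" from the layout dump
l2p_addr_size = 4                        # "L2P address size" (bytes per entry)
print(l2p_entries * l2p_addr_size / MiB) # 80.0 -> "Region l2p ... blocks: 80.00 MiB"

blk_offs, blk_sz = 0x20, 0x5000          # superblock region type:0x2
print(blk_offs * FTL_BLOCK_SIZE / MiB)   # 0.125 -> "offset: 0.12 MiB"
print(blk_sz * FTL_BLOCK_SIZE / MiB)     # 80.0  -> same 80 MiB region
# Only a 10 MiB cache is configured for that table, of which at most 9 MiB
# (about 1/8 of the full 80 MiB) stays resident, per the ftl_l2p_cache_init
# notice above.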
00:28:46.967 [2024-11-26 19:13:18.099090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:18.099111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:28:46.967 [2024-11-26 19:13:18.099128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:28:46.967 [2024-11-26 19:13:18.099139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.967 [2024-11-26 19:13:18.100825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:18.100866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:28:46.967 [2024-11-26 19:13:18.100880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.609 ms
00:28:46.967 [2024-11-26 19:13:18.100894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.967 [2024-11-26 19:13:18.100938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:18.100954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:28:46.967 [2024-11-26 19:13:18.100966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:28:46.967 [2024-11-26 19:13:18.100976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.967 [2024-11-26 19:13:18.101027] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:46.967 [2024-11-26 19:13:18.101044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.967 [2024-11-26 19:13:18.101055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:28:46.968 [2024-11-26 19:13:18.101067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:28:46.968 [2024-11-26 19:13:18.101077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.968 [2024-11-26 19:13:18.133265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.968 [2024-11-26 19:13:18.133343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:28:46.968 [2024-11-26 19:13:18.133375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.157 ms
00:28:46.968 [2024-11-26 19:13:18.133388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.968 [2024-11-26 19:13:18.133515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.968 [2024-11-26 19:13:18.133534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:46.968 [2024-11-26 19:13:18.133546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:28:46.968 [2024-11-26 19:13:18.133557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.968 [2024-11-26 19:13:18.137358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 345.819 ms, result 0 00:28:48.341  [2024-11-26T19:13:20.491Z] Copying: 736/1048576 [kB] (736 kBps) [... intermediate dd progress updates elided; throughput ramps from 736 kBps to a steady 24-31 MBps ...] [2024-11-26T19:13:55.672Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-26 19:13:55.512875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.457 [2024-11-26 19:13:55.512974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:24.457 [2024-11-26 19:13:55.512999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:24.457 [2024-11-26 19:13:55.513017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.457 [2024-11-26 19:13:55.513089] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:24.457 [2024-11-26 19:13:55.517863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.457 [2024-11-26 19:13:55.517930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:24.457 [2024-11-26 19:13:55.517957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.727 ms 00:29:24.457 [2024-11-26 19:13:55.517971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.457 [2024-11-26
19:13:55.518633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.457 [2024-11-26 19:13:55.518696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:24.457 [2024-11-26 19:13:55.518714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:29:24.457 [2024-11-26 19:13:55.518727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.457 [2024-11-26 19:13:55.530165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.457 [2024-11-26 19:13:55.530294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:24.457 [2024-11-26 19:13:55.530319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.403 ms 00:29:24.457 [2024-11-26 19:13:55.530333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.457 [2024-11-26 19:13:55.538679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.457 [2024-11-26 19:13:55.538777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:24.457 [2024-11-26 19:13:55.538832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.287 ms 00:29:24.457 [2024-11-26 19:13:55.538848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.457 [2024-11-26 19:13:55.578884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.457 [2024-11-26 19:13:55.578987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:24.457 [2024-11-26 19:13:55.579011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.897 ms 00:29:24.457 [2024-11-26 19:13:55.579025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.457 [2024-11-26 19:13:55.601397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.457 [2024-11-26 19:13:55.601491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:24.457 [2024-11-26 19:13:55.601514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.269 ms 00:29:24.457 [2024-11-26 19:13:55.601528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.457 [2024-11-26 19:13:55.603311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.457 [2024-11-26 19:13:55.603364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:24.457 [2024-11-26 19:13:55.603382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:29:24.457 [2024-11-26 19:13:55.603423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.457 [2024-11-26 19:13:55.643828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.457 [2024-11-26 19:13:55.643962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:24.457 [2024-11-26 19:13:55.643991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.367 ms 00:29:24.457 [2024-11-26 19:13:55.644006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.716 [2024-11-26 19:13:55.684096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.716 [2024-11-26 19:13:55.684215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:24.716 [2024-11-26 19:13:55.684239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.991 ms 00:29:24.716 [2024-11-26 19:13:55.684254] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.716 [2024-11-26 19:13:55.723760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.716 [2024-11-26 19:13:55.723856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:24.716 [2024-11-26 19:13:55.723900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.409 ms 00:29:24.716 [2024-11-26 19:13:55.723920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.716 [2024-11-26 19:13:55.763459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.716 [2024-11-26 19:13:55.763559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:24.716 [2024-11-26 19:13:55.763582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.344 ms 00:29:24.716 [2024-11-26 19:13:55.763596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.716 [2024-11-26 19:13:55.763683] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:24.716 [2024-11-26 19:13:55.763713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:24.716 [2024-11-26 19:13:55.763731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:24.716 [2024-11-26 19:13:55.763746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.763988] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 
19:13:55.764388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:24.716 [2024-11-26 19:13:55.764616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:29:24.717 [2024-11-26 19:13:55.764725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.764996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:24.717 [2024-11-26 19:13:55.765203] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:24.717 [2024-11-26 19:13:55.765219] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 023b4b4f-bcf0-4338-8c93-3af230e4a41f 00:29:24.717 [2024-11-26 19:13:55.765233] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:24.717 [2024-11-26 19:13:55.765246] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 134592 00:29:24.717 [2024-11-26 19:13:55.765271] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 132608 00:29:24.717 [2024-11-26 19:13:55.765286] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0150 00:29:24.717 [2024-11-26 19:13:55.765298] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:24.717 [2024-11-26 19:13:55.765334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:24.717 [2024-11-26 19:13:55.765347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:24.717 [2024-11-26 19:13:55.765359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:24.717 [2024-11-26 19:13:55.765371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:24.717 [2024-11-26 19:13:55.765385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.717 [2024-11-26 19:13:55.765399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:24.717 [2024-11-26 19:13:55.765412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:29:24.717 [2024-11-26 19:13:55.765426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.717 [2024-11-26 19:13:55.786349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.717 [2024-11-26 19:13:55.786436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:24.717 [2024-11-26 19:13:55.786459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.848 ms 00:29:24.717 [2024-11-26 19:13:55.786473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.717 [2024-11-26 19:13:55.787037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.717 [2024-11-26 19:13:55.787072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:29:24.717 [2024-11-26 19:13:55.787089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:29:24.717 [2024-11-26 19:13:55.787103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.717 [2024-11-26 19:13:55.841425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.717 [2024-11-26 19:13:55.841514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:24.717 [2024-11-26 19:13:55.841534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.717 [2024-11-26 19:13:55.841548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.717 [2024-11-26 19:13:55.841647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.717 [2024-11-26 19:13:55.841664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:24.717 [2024-11-26 19:13:55.841688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.717 [2024-11-26 19:13:55.841702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.717 [2024-11-26 19:13:55.841888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.717 [2024-11-26 19:13:55.841912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:24.717 [2024-11-26 19:13:55.841927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.717 [2024-11-26 19:13:55.841939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.717 [2024-11-26 19:13:55.841967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.717 [2024-11-26 19:13:55.841984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:24.717 [2024-11-26 19:13:55.841997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.717 [2024-11-26 19:13:55.842009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.975 [2024-11-26 19:13:55.969992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.975 [2024-11-26 19:13:55.970103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:24.975 [2024-11-26 19:13:55.970127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.975 [2024-11-26 19:13:55.970143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.975 [2024-11-26 19:13:56.060308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.975 [2024-11-26 19:13:56.060389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:24.975 [2024-11-26 19:13:56.060408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.975 [2024-11-26 19:13:56.060420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.975 [2024-11-26 19:13:56.060548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.975 [2024-11-26 19:13:56.060570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:24.975 [2024-11-26 19:13:56.060583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.975 [2024-11-26 19:13:56.060594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.975 [2024-11-26 19:13:56.060641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.975 
[2024-11-26 19:13:56.060655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:24.975 [2024-11-26 19:13:56.060667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.975 [2024-11-26 19:13:56.060678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.975 [2024-11-26 19:13:56.060803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.975 [2024-11-26 19:13:56.060834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:24.975 [2024-11-26 19:13:56.060853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.975 [2024-11-26 19:13:56.060864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.975 [2024-11-26 19:13:56.060913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.975 [2024-11-26 19:13:56.060931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:24.975 [2024-11-26 19:13:56.060942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.975 [2024-11-26 19:13:56.060953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.975 [2024-11-26 19:13:56.060998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.975 [2024-11-26 19:13:56.061013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:24.975 [2024-11-26 19:13:56.061031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.975 [2024-11-26 19:13:56.061042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.975 [2024-11-26 19:13:56.061096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.975 [2024-11-26 19:13:56.061112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:24.975 [2024-11-26 19:13:56.061124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.975 [2024-11-26 19:13:56.061135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.975 [2024-11-26 19:13:56.061304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.417 ms, result 0 00:29:25.910 00:29:25.910 00:29:25.910 19:13:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:28.440 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:28.440 19:13:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:28.440 [2024-11-26 19:13:59.434919] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
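
The two commands above are the data-integrity check of the dirty-shutdown test: md5sum -c at dirty_shutdown.sh@94 checks the testfile copied back from the ftl0 bdev against the checksum recorded before the unclean shutdown, and spdk_dd at dirty_shutdown.sh@95 then reads the next region into testfile2 for the same comparison. Given --count=262144 and --skip=262144, and the 1024 MB copy total that follows, each block is 4 KiB (262144 x 4 KiB = 1 GiB), so this pass reads back the second 1 GiB of the device while skipping the first, already-verified, 1 GiB. A condensed sketch of that round trip, with paths shortened from the log and testfile2.md5 a placeholder name for the suite's recorded checksum:

    # 262144 blocks x 4 KiB = 1 GiB; --skip jumps over the first, already
    # verified, 1 GiB of the ftl0 bdev.
    spdk_dd --ib=ftl0 --of=testfile2 --count=262144 --skip=262144 --json=ftl.json
    md5sum -c testfile2.md5   # read-back must match the pre-shutdown writes
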
00:29:28.440 [2024-11-26 19:13:59.435396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82964 ] 00:29:28.440 [2024-11-26 19:13:59.615675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.698 [2024-11-26 19:13:59.742858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.957 [2024-11-26 19:14:00.109447] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:28.957 [2024-11-26 19:14:00.109542] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:29.223 [2024-11-26 19:14:00.271850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.223 [2024-11-26 19:14:00.271949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:29.223 [2024-11-26 19:14:00.271970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:29.223 [2024-11-26 19:14:00.271983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.272072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.223 [2024-11-26 19:14:00.272095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:29.223 [2024-11-26 19:14:00.272109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:29.223 [2024-11-26 19:14:00.272120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.272158] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:29.223 [2024-11-26 19:14:00.273248] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:29.223 [2024-11-26 19:14:00.273294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.223 [2024-11-26 19:14:00.273314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:29.223 [2024-11-26 19:14:00.273328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:29:29.223 [2024-11-26 19:14:00.273339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.274662] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:29.223 [2024-11-26 19:14:00.291545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.223 [2024-11-26 19:14:00.291661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:29.223 [2024-11-26 19:14:00.291687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.878 ms 00:29:29.223 [2024-11-26 19:14:00.291701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.291840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.223 [2024-11-26 19:14:00.291871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:29.223 [2024-11-26 19:14:00.291890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:29.223 [2024-11-26 19:14:00.291901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.296907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:29.223 [2024-11-26 19:14:00.296980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:29.223 [2024-11-26 19:14:00.296999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.864 ms 00:29:29.223 [2024-11-26 19:14:00.297022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.297148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.223 [2024-11-26 19:14:00.297213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:29.223 [2024-11-26 19:14:00.297232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:29:29.223 [2024-11-26 19:14:00.297244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.297328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.223 [2024-11-26 19:14:00.297347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:29.223 [2024-11-26 19:14:00.297360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:29.223 [2024-11-26 19:14:00.297374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.297429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:29.223 [2024-11-26 19:14:00.301788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.223 [2024-11-26 19:14:00.301836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:29.223 [2024-11-26 19:14:00.301858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.370 ms 00:29:29.223 [2024-11-26 19:14:00.301870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.301927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.223 [2024-11-26 19:14:00.301952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:29.223 [2024-11-26 19:14:00.301967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:29.223 [2024-11-26 19:14:00.301979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.302062] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:29.223 [2024-11-26 19:14:00.302097] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:29.223 [2024-11-26 19:14:00.302142] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:29.223 [2024-11-26 19:14:00.302187] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:29.223 [2024-11-26 19:14:00.302307] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:29.223 [2024-11-26 19:14:00.302323] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:29.223 [2024-11-26 19:14:00.302337] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:29.223 [2024-11-26 19:14:00.302352] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:29.223 [2024-11-26 19:14:00.302366] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:29.223 [2024-11-26 19:14:00.302379] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:29.223 [2024-11-26 19:14:00.302390] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:29.223 [2024-11-26 19:14:00.302406] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:29.223 [2024-11-26 19:14:00.302417] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:29.223 [2024-11-26 19:14:00.302429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.223 [2024-11-26 19:14:00.302441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:29.223 [2024-11-26 19:14:00.302453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:29:29.223 [2024-11-26 19:14:00.302464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.302574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.223 [2024-11-26 19:14:00.302593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:29.223 [2024-11-26 19:14:00.302605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:29:29.223 [2024-11-26 19:14:00.302617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.223 [2024-11-26 19:14:00.302754] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:29.223 [2024-11-26 19:14:00.302788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:29.223 [2024-11-26 19:14:00.302803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:29.224 [2024-11-26 19:14:00.302815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.224 [2024-11-26 19:14:00.302826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:29.224 [2024-11-26 19:14:00.302837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:29.224 [2024-11-26 19:14:00.302848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:29.224 [2024-11-26 19:14:00.302858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:29.224 [2024-11-26 19:14:00.302869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:29.224 [2024-11-26 19:14:00.302879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:29.224 [2024-11-26 19:14:00.302890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:29.224 [2024-11-26 19:14:00.302901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:29.224 [2024-11-26 19:14:00.302911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:29.224 [2024-11-26 19:14:00.302937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:29.224 [2024-11-26 19:14:00.302949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:29.224 [2024-11-26 19:14:00.302959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.224 [2024-11-26 19:14:00.302969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:29.224 [2024-11-26 19:14:00.302979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:29.224 [2024-11-26 19:14:00.302990] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.224 [2024-11-26 19:14:00.303003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:29.224 [2024-11-26 19:14:00.303013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:29.224 [2024-11-26 19:14:00.303024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.224 [2024-11-26 19:14:00.303034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:29.224 [2024-11-26 19:14:00.303044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:29.224 [2024-11-26 19:14:00.303054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.224 [2024-11-26 19:14:00.303064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:29.224 [2024-11-26 19:14:00.303074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:29.224 [2024-11-26 19:14:00.303085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.224 [2024-11-26 19:14:00.303095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:29.224 [2024-11-26 19:14:00.303105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:29.224 [2024-11-26 19:14:00.303115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.224 [2024-11-26 19:14:00.303126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:29.224 [2024-11-26 19:14:00.303136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:29.224 [2024-11-26 19:14:00.303146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:29.224 [2024-11-26 19:14:00.303156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:29.224 [2024-11-26 19:14:00.303167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:29.224 [2024-11-26 19:14:00.303197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:29.224 [2024-11-26 19:14:00.303211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:29.224 [2024-11-26 19:14:00.303226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:29.224 [2024-11-26 19:14:00.303238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.224 [2024-11-26 19:14:00.303248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:29.224 [2024-11-26 19:14:00.303258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:29.224 [2024-11-26 19:14:00.303268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.224 [2024-11-26 19:14:00.303278] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:29.224 [2024-11-26 19:14:00.303289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:29.224 [2024-11-26 19:14:00.303301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:29.224 [2024-11-26 19:14:00.303312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.224 [2024-11-26 19:14:00.303323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:29.224 [2024-11-26 19:14:00.303334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:29.224 [2024-11-26 19:14:00.303344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:29.224 
[2024-11-26 19:14:00.303355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:29.224 [2024-11-26 19:14:00.303367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:29.224 [2024-11-26 19:14:00.303378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:29.224 [2024-11-26 19:14:00.303390] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:29.224 [2024-11-26 19:14:00.303404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:29.224 [2024-11-26 19:14:00.303423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:29.224 [2024-11-26 19:14:00.303435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:29.224 [2024-11-26 19:14:00.303446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:29.224 [2024-11-26 19:14:00.303457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:29.224 [2024-11-26 19:14:00.303469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:29.224 [2024-11-26 19:14:00.303480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:29.224 [2024-11-26 19:14:00.303491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:29.224 [2024-11-26 19:14:00.303502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:29.224 [2024-11-26 19:14:00.303513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:29.224 [2024-11-26 19:14:00.303524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:29.224 [2024-11-26 19:14:00.303535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:29.224 [2024-11-26 19:14:00.303547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:29.224 [2024-11-26 19:14:00.303558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:29.224 [2024-11-26 19:14:00.303569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:29.224 [2024-11-26 19:14:00.303580] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:29.224 [2024-11-26 19:14:00.303593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:29.224 [2024-11-26 19:14:00.303605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:29.224 [2024-11-26 19:14:00.303617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:29.224 [2024-11-26 19:14:00.303628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:29.224 [2024-11-26 19:14:00.303639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:29.224 [2024-11-26 19:14:00.303652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.224 [2024-11-26 19:14:00.303664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:29.224 [2024-11-26 19:14:00.303679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:29:29.224 [2024-11-26 19:14:00.303697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.224 [2024-11-26 19:14:00.338367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.225 [2024-11-26 19:14:00.338446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:29.225 [2024-11-26 19:14:00.338469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.598 ms 00:29:29.225 [2024-11-26 19:14:00.338490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.225 [2024-11-26 19:14:00.338609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.225 [2024-11-26 19:14:00.338626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:29.225 [2024-11-26 19:14:00.338644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:29.225 [2024-11-26 19:14:00.338664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.225 [2024-11-26 19:14:00.389268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.225 [2024-11-26 19:14:00.389340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:29.225 [2024-11-26 19:14:00.389368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.459 ms 00:29:29.225 [2024-11-26 19:14:00.389382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.225 [2024-11-26 19:14:00.389467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.225 [2024-11-26 19:14:00.389485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:29.225 [2024-11-26 19:14:00.389506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:29.225 [2024-11-26 19:14:00.389517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.225 [2024-11-26 19:14:00.389979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.225 [2024-11-26 19:14:00.390017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:29.225 [2024-11-26 19:14:00.390037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:29:29.225 [2024-11-26 19:14:00.390050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.225 [2024-11-26 19:14:00.390238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.225 [2024-11-26 19:14:00.390271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:29.225 [2024-11-26 19:14:00.390301] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:29:29.225 [2024-11-26 19:14:00.390320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.225 [2024-11-26 19:14:00.407274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.225 [2024-11-26 19:14:00.407348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:29.225 [2024-11-26 19:14:00.407369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.917 ms 00:29:29.225 [2024-11-26 19:14:00.407381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.225 [2024-11-26 19:14:00.424494] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:29.225 [2024-11-26 19:14:00.424574] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:29.225 [2024-11-26 19:14:00.424597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.225 [2024-11-26 19:14:00.424610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:29.225 [2024-11-26 19:14:00.424625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.025 ms 00:29:29.225 [2024-11-26 19:14:00.424637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.455830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.455934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:29.487 [2024-11-26 19:14:00.455954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.895 ms 00:29:29.487 [2024-11-26 19:14:00.455968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.473187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.473274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:29.487 [2024-11-26 19:14:00.473294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.951 ms 00:29:29.487 [2024-11-26 19:14:00.473306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.490086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.490211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:29.487 [2024-11-26 19:14:00.490234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.684 ms 00:29:29.487 [2024-11-26 19:14:00.490246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.491207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.491244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:29.487 [2024-11-26 19:14:00.491265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:29:29.487 [2024-11-26 19:14:00.491277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.568763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.568855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:29.487 [2024-11-26 19:14:00.568892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.453 ms 00:29:29.487 [2024-11-26 19:14:00.568905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.582008] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:29.487 [2024-11-26 19:14:00.584863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.584910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:29.487 [2024-11-26 19:14:00.584929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.868 ms 00:29:29.487 [2024-11-26 19:14:00.584941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.585084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.585115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:29.487 [2024-11-26 19:14:00.585136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:29.487 [2024-11-26 19:14:00.585147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.585819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.585852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:29.487 [2024-11-26 19:14:00.585866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:29:29.487 [2024-11-26 19:14:00.585878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.585915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.585931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:29.487 [2024-11-26 19:14:00.585944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:29.487 [2024-11-26 19:14:00.585955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.586005] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:29.487 [2024-11-26 19:14:00.586022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.586033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:29.487 [2024-11-26 19:14:00.586046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:29.487 [2024-11-26 19:14:00.586057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.618655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.618735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:29.487 [2024-11-26 19:14:00.618767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.567 ms 00:29:29.487 [2024-11-26 19:14:00.618781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.487 [2024-11-26 19:14:00.618957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.487 [2024-11-26 19:14:00.618991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:29.487 [2024-11-26 19:14:00.619007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:29:29.487 [2024-11-26 19:14:00.619018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:29.487 [2024-11-26 19:14:00.620373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 347.954 ms, result 0 00:29:30.955  [2024-11-26T19:14:38.160Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-26 19:14:38.120394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.946 [2024-11-26 19:14:38.120477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:06.946 [2024-11-26 19:14:38.120498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:06.946 [2024-11-26 19:14:38.120511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.946 [2024-11-26 19:14:38.120544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:06.946 [2024-11-26 19:14:38.124606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.946 [2024-11-26 19:14:38.124680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:06.946 [2024-11-26 19:14:38.124699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.031 ms 00:30:06.946 [2024-11-26 19:14:38.124712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.946 [2024-11-26
19:14:38.124993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.946 [2024-11-26 19:14:38.125022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:06.946 [2024-11-26 19:14:38.125036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:30:06.946 [2024-11-26 19:14:38.125054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.946 [2024-11-26 19:14:38.128604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.946 [2024-11-26 19:14:38.128647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:06.946 [2024-11-26 19:14:38.128661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.526 ms 00:30:06.946 [2024-11-26 19:14:38.128682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.946 [2024-11-26 19:14:38.135491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.946 [2024-11-26 19:14:38.135559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:06.946 [2024-11-26 19:14:38.135576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.776 ms 00:30:06.946 [2024-11-26 19:14:38.135588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.205 [2024-11-26 19:14:38.169398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.206 [2024-11-26 19:14:38.169490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:07.206 [2024-11-26 19:14:38.169511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.690 ms 00:30:07.206 [2024-11-26 19:14:38.169524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.206 [2024-11-26 19:14:38.188243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.206 [2024-11-26 19:14:38.188333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:07.206 [2024-11-26 19:14:38.188353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.627 ms 00:30:07.206 [2024-11-26 19:14:38.188365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.206 [2024-11-26 19:14:38.189840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.206 [2024-11-26 19:14:38.189885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:07.206 [2024-11-26 19:14:38.189903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:30:07.206 [2024-11-26 19:14:38.189915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.206 [2024-11-26 19:14:38.223068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.206 [2024-11-26 19:14:38.223154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:07.206 [2024-11-26 19:14:38.223185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.122 ms 00:30:07.206 [2024-11-26 19:14:38.223201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.206 [2024-11-26 19:14:38.256380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.206 [2024-11-26 19:14:38.256464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:07.206 [2024-11-26 19:14:38.256485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.086 ms 00:30:07.206 [2024-11-26 19:14:38.256497] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.206 [2024-11-26 19:14:38.289828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.206 [2024-11-26 19:14:38.289926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:07.206 [2024-11-26 19:14:38.289946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.219 ms 00:30:07.206 [2024-11-26 19:14:38.289958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.206 [2024-11-26 19:14:38.323053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.206 [2024-11-26 19:14:38.323140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:07.206 [2024-11-26 19:14:38.323159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.919 ms 00:30:07.206 [2024-11-26 19:14:38.323183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.206 [2024-11-26 19:14:38.323270] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:07.206 [2024-11-26 19:14:38.323310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:07.206 [2024-11-26 19:14:38.323330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:07.206 [2024-11-26 19:14:38.323343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323526] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 
19:14:38.323819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.323994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.324006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.324017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.324029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.324040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.324052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.324063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.324075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.324087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.324099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:07.206 [2024-11-26 19:14:38.324111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:30:07.207 [2024-11-26 19:14:38.324153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:07.207 [2024-11-26 19:14:38.324575] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:07.207 [2024-11-26 19:14:38.324587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 023b4b4f-bcf0-4338-8c93-3af230e4a41f 00:30:07.207 [2024-11-26 19:14:38.324599] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:07.207 [2024-11-26 19:14:38.324610] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:07.207 [2024-11-26 19:14:38.324621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:07.207 [2024-11-26 19:14:38.324633] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:07.207 [2024-11-26 19:14:38.324661] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:07.207 [2024-11-26 19:14:38.324682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:07.207 [2024-11-26 19:14:38.324692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:07.207 [2024-11-26 19:14:38.324702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:07.207 [2024-11-26 19:14:38.324712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:07.207 [2024-11-26 19:14:38.324724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.207 [2024-11-26 19:14:38.324735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:07.207 [2024-11-26 19:14:38.324748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.456 ms 00:30:07.207 [2024-11-26 19:14:38.324764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.207 [2024-11-26 19:14:38.341901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.207 [2024-11-26 19:14:38.341976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:07.207 [2024-11-26 19:14:38.341996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.061 ms 00:30:07.207 [2024-11-26 19:14:38.342007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.207 [2024-11-26 19:14:38.342562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.207 [2024-11-26 19:14:38.342608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:30:07.207 [2024-11-26 19:14:38.342625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:30:07.207 [2024-11-26 19:14:38.342636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.207 [2024-11-26 19:14:38.386562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.207 [2024-11-26 19:14:38.386643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:07.207 [2024-11-26 19:14:38.386664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.207 [2024-11-26 19:14:38.386676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.207 [2024-11-26 19:14:38.386781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.207 [2024-11-26 19:14:38.386812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:07.207 [2024-11-26 19:14:38.386825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.207 [2024-11-26 19:14:38.386837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.207 [2024-11-26 19:14:38.386962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.207 [2024-11-26 19:14:38.386982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:07.207 [2024-11-26 19:14:38.386995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.207 [2024-11-26 19:14:38.387006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.207 [2024-11-26 19:14:38.387029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.207 [2024-11-26 19:14:38.387049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:07.207 [2024-11-26 19:14:38.387069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.207 [2024-11-26 19:14:38.387080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.466 [2024-11-26 19:14:38.492440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.466 [2024-11-26 19:14:38.492519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:07.466 [2024-11-26 19:14:38.492540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.466 [2024-11-26 19:14:38.492552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.466 [2024-11-26 19:14:38.579538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.466 [2024-11-26 19:14:38.579632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:07.466 [2024-11-26 19:14:38.579651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.466 [2024-11-26 19:14:38.579663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.466 [2024-11-26 19:14:38.579787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.466 [2024-11-26 19:14:38.579807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:07.466 [2024-11-26 19:14:38.579820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.466 [2024-11-26 19:14:38.579831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.466 [2024-11-26 19:14:38.579884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.466 [2024-11-26 
19:14:38.579913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:07.466 [2024-11-26 19:14:38.579927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.466 [2024-11-26 19:14:38.579943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.466 [2024-11-26 19:14:38.580146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.466 [2024-11-26 19:14:38.580199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:07.466 [2024-11-26 19:14:38.580215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.466 [2024-11-26 19:14:38.580227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.466 [2024-11-26 19:14:38.580289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.466 [2024-11-26 19:14:38.580307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:07.466 [2024-11-26 19:14:38.580320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.466 [2024-11-26 19:14:38.580331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.466 [2024-11-26 19:14:38.580383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.466 [2024-11-26 19:14:38.580400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:07.466 [2024-11-26 19:14:38.580412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.466 [2024-11-26 19:14:38.580423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.466 [2024-11-26 19:14:38.580489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.466 [2024-11-26 19:14:38.580507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:07.466 [2024-11-26 19:14:38.580519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.466 [2024-11-26 19:14:38.580537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.466 [2024-11-26 19:14:38.580714] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.257 ms, result 0 00:30:08.447 00:30:08.447 00:30:08.447 19:14:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:10.981 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:10.981 19:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:10.981 19:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:10.981 19:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:10.981 19:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:10.981 19:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:10.981 19:14:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:10.981 19:14:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:10.981 Process with pid 81197 is not found 00:30:10.981 19:14:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81197 
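killprocess is invoked here with the pid recorded when the FTL app was launched, and the xtrace that continues below shows its shape: bail out on an empty argument, probe the pid with kill -0, and merely report when the process is already gone (as in this run, where the target exited on its own during the clean shutdown). A minimal sketch of that pattern; the real helper in autotest_common.sh additionally handles signal escalation and waiting:

  # Minimal sketch of the killprocess pattern seen in the xtrace below; the
  # real autotest_common.sh helper also escalates signals before giving up.
  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1              # nothing recorded, nothing to kill
      if ! kill -0 "$pid" 2> /dev/null; then
          echo "Process with pid $pid is not found"
          return 0                           # already exited, e.g. after a clean shutdown
      fi
      kill "$pid" && wait "$pid"             # ask it to exit and reap it
  }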
00:30:10.981 19:14:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81197 ']' 00:30:10.981 19:14:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81197 00:30:10.981 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81197) - No such process 00:30:10.981 19:14:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81197 is not found' 00:30:10.981 19:14:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:11.240 Remove shared memory files 00:30:11.240 19:14:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:11.240 19:14:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:11.240 19:14:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:11.240 19:14:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:11.240 19:14:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:11.240 19:14:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:11.240 19:14:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:11.240 00:30:11.240 real 3m39.115s 00:30:11.240 user 4m8.935s 00:30:11.240 sys 0m39.100s 00:30:11.240 19:14:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.240 19:14:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:11.240 ************************************ 00:30:11.240 END TEST ftl_dirty_shutdown 00:30:11.240 ************************************ 00:30:11.240 19:14:42 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:11.240 19:14:42 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:11.240 19:14:42 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.240 19:14:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:11.240 ************************************ 00:30:11.240 START TEST ftl_upgrade_shutdown 00:30:11.240 ************************************ 00:30:11.240 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:11.500 * Looking for test storage... 
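Before the upgrade test body runs, autotest_common.sh gates the lcov coverage options on the installed lcov version; the xtrace below steps through scripts/common.sh as lt 1.15 2 asks whether version 1.15 sorts before 2. A condensed sketch of that comparison (the per-component decimal sanitization visible in the trace is omitted here):

  # Condensed sketch of scripts/common.sh cmp_versions, as exercised by "lt 1.15 2":
  # split both versions on . - : and compare component-wise as integers.
  cmp_versions() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
              [[ $op == '>' || $op == '>=' ]]; return
          elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
              [[ $op == '<' || $op == '<=' ]]; return
          fi
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }                  # lt 1.15 2 succeeds: 1 < 2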
00:30:11.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:11.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.500 --rc genhtml_branch_coverage=1 00:30:11.500 --rc genhtml_function_coverage=1 00:30:11.500 --rc genhtml_legend=1 00:30:11.500 --rc geninfo_all_blocks=1 00:30:11.500 --rc geninfo_unexecuted_blocks=1 00:30:11.500 00:30:11.500 ' 00:30:11.500 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:11.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.501 --rc genhtml_branch_coverage=1 00:30:11.501 --rc genhtml_function_coverage=1 00:30:11.501 --rc genhtml_legend=1 00:30:11.501 --rc geninfo_all_blocks=1 00:30:11.501 --rc geninfo_unexecuted_blocks=1 00:30:11.501 00:30:11.501 ' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:11.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.501 --rc genhtml_branch_coverage=1 00:30:11.501 --rc genhtml_function_coverage=1 00:30:11.501 --rc genhtml_legend=1 00:30:11.501 --rc geninfo_all_blocks=1 00:30:11.501 --rc geninfo_unexecuted_blocks=1 00:30:11.501 00:30:11.501 ' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:11.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.501 --rc genhtml_branch_coverage=1 00:30:11.501 --rc genhtml_function_coverage=1 00:30:11.501 --rc genhtml_legend=1 00:30:11.501 --rc geninfo_all_blocks=1 00:30:11.501 --rc geninfo_unexecuted_blocks=1 00:30:11.501 00:30:11.501 ' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:11.501 19:14:42 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83445 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83445 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83445 ']' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.501 19:14:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:11.760 [2024-11-26 19:14:42.806375] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
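tcp_target_setup boils down to launching spdk_tgt pinned to core 0 and blocking in waitforlisten until the RPC socket answers; the trace above shows the real parameters (pid 83445, rpc_addr /var/tmp/spdk.sock, max_retries=100). A sketch of that launch-and-poll pattern, hedged: the spdk_get_version probe and the sleep interval are illustrative choices, the real helper inspects the process and socket more carefully:

  # Sketch of the launch-and-poll pattern behind tcp_target_setup / waitforlisten.
  # spdk_tgt_bin and rootdir come from ftl/common.sh; the probe is illustrative.
  "$spdk_tgt_bin" --cpumask='[0]' &
  spdk_tgt_pid=$!

  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for (( i = 0; i < max_retries; i++ )); do
      if "$rootdir/scripts/rpc.py" -s "$rpc_addr" spdk_get_version &> /dev/null; then
          break                              # target is up and serving RPCs
      fi
      sleep 0.5
  done

Once waitforlisten returns, the bdev_nvme_attach_controller and lvol RPCs that follow can assume a live target.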
00:30:11.760 [2024-11-26 19:14:42.806550] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83445 ] 00:30:12.018 [2024-11-26 19:14:42.995543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.018 [2024-11-26 19:14:43.123608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:12.955 19:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:13.214 19:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:13.214 19:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:13.214 19:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:13.214 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:13.214 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:13.214 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:13.214 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:13.214 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:13.473 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:13.473 { 00:30:13.473 "name": "basen1", 00:30:13.473 "aliases": [ 00:30:13.473 "63943742-e430-45e3-bd06-9a28ce0e2f01" 00:30:13.473 ], 00:30:13.473 "product_name": "NVMe disk", 00:30:13.473 "block_size": 4096, 00:30:13.473 "num_blocks": 1310720, 00:30:13.473 "uuid": "63943742-e430-45e3-bd06-9a28ce0e2f01", 00:30:13.473 "numa_id": -1, 00:30:13.473 "assigned_rate_limits": { 00:30:13.473 "rw_ios_per_sec": 0, 00:30:13.473 "rw_mbytes_per_sec": 0, 00:30:13.473 "r_mbytes_per_sec": 0, 00:30:13.473 "w_mbytes_per_sec": 0 00:30:13.473 }, 00:30:13.473 "claimed": true, 00:30:13.473 "claim_type": "read_many_write_one", 00:30:13.473 "zoned": false, 00:30:13.473 "supported_io_types": { 00:30:13.473 "read": true, 00:30:13.473 "write": true, 00:30:13.473 "unmap": true, 00:30:13.473 "flush": true, 00:30:13.473 "reset": true, 00:30:13.473 "nvme_admin": true, 00:30:13.473 "nvme_io": true, 00:30:13.473 "nvme_io_md": false, 00:30:13.473 "write_zeroes": true, 00:30:13.473 "zcopy": false, 00:30:13.473 "get_zone_info": false, 00:30:13.473 "zone_management": false, 00:30:13.473 "zone_append": false, 00:30:13.473 "compare": true, 00:30:13.473 "compare_and_write": false, 00:30:13.473 "abort": true, 00:30:13.473 "seek_hole": false, 00:30:13.473 "seek_data": false, 00:30:13.473 "copy": true, 00:30:13.473 "nvme_iov_md": false 00:30:13.473 }, 00:30:13.473 "driver_specific": { 00:30:13.473 "nvme": [ 00:30:13.473 { 00:30:13.473 "pci_address": "0000:00:11.0", 00:30:13.473 "trid": { 00:30:13.473 "trtype": "PCIe", 00:30:13.473 "traddr": "0000:00:11.0" 00:30:13.473 }, 00:30:13.473 "ctrlr_data": { 00:30:13.473 "cntlid": 0, 00:30:13.473 "vendor_id": "0x1b36", 00:30:13.473 "model_number": "QEMU NVMe Ctrl", 00:30:13.473 "serial_number": "12341", 00:30:13.473 "firmware_revision": "8.0.0", 00:30:13.473 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:13.473 "oacs": { 00:30:13.473 "security": 0, 00:30:13.473 "format": 1, 00:30:13.473 "firmware": 0, 00:30:13.473 "ns_manage": 1 00:30:13.473 }, 00:30:13.473 "multi_ctrlr": false, 00:30:13.473 "ana_reporting": false 00:30:13.473 }, 00:30:13.473 "vs": { 00:30:13.473 "nvme_version": "1.4" 00:30:13.473 }, 00:30:13.473 "ns_data": { 00:30:13.473 "id": 1, 00:30:13.473 "can_share": false 00:30:13.473 } 00:30:13.473 } 00:30:13.473 ], 00:30:13.473 "mp_policy": "active_passive" 00:30:13.473 } 00:30:13.473 } 00:30:13.473 ]' 00:30:13.473 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:13.731 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:13.731 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:13.731 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:13.731 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:13.731 19:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:13.731 19:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:13.731 19:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:13.731 19:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:13.731 19:14:44 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:13.731 19:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:13.989 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a82b8d63-0b8a-4466-908b-59f9a65880e3 00:30:13.989 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:13.989 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a82b8d63-0b8a-4466-908b-59f9a65880e3 00:30:14.247 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:14.506 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=85098767-acf8-41ea-baec-34cf03871f51 00:30:14.506 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 85098767-acf8-41ea-baec-34cf03871f51 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=6dadf4db-3eed-4d51-b859-155e69013d0e 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 6dadf4db-3eed-4d51-b859-155e69013d0e ]] 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 6dadf4db-3eed-4d51-b859-155e69013d0e 5120 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=6dadf4db-3eed-4d51-b859-155e69013d0e 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6dadf4db-3eed-4d51-b859-155e69013d0e 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=6dadf4db-3eed-4d51-b859-155e69013d0e 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:14.765 19:14:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6dadf4db-3eed-4d51-b859-155e69013d0e 00:30:15.337 19:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:15.337 { 00:30:15.337 "name": "6dadf4db-3eed-4d51-b859-155e69013d0e", 00:30:15.337 "aliases": [ 00:30:15.337 "lvs/basen1p0" 00:30:15.337 ], 00:30:15.337 "product_name": "Logical Volume", 00:30:15.337 "block_size": 4096, 00:30:15.337 "num_blocks": 5242880, 00:30:15.337 "uuid": "6dadf4db-3eed-4d51-b859-155e69013d0e", 00:30:15.337 "assigned_rate_limits": { 00:30:15.337 "rw_ios_per_sec": 0, 00:30:15.337 "rw_mbytes_per_sec": 0, 00:30:15.337 "r_mbytes_per_sec": 0, 00:30:15.337 "w_mbytes_per_sec": 0 00:30:15.337 }, 00:30:15.337 "claimed": false, 00:30:15.337 "zoned": false, 00:30:15.337 "supported_io_types": { 00:30:15.337 "read": true, 00:30:15.337 "write": true, 00:30:15.337 "unmap": true, 00:30:15.337 "flush": false, 00:30:15.337 "reset": true, 00:30:15.337 "nvme_admin": false, 00:30:15.337 "nvme_io": false, 00:30:15.337 "nvme_io_md": false, 00:30:15.337 "write_zeroes": 
true, 00:30:15.337 "zcopy": false, 00:30:15.337 "get_zone_info": false, 00:30:15.337 "zone_management": false, 00:30:15.337 "zone_append": false, 00:30:15.337 "compare": false, 00:30:15.337 "compare_and_write": false, 00:30:15.337 "abort": false, 00:30:15.337 "seek_hole": true, 00:30:15.337 "seek_data": true, 00:30:15.337 "copy": false, 00:30:15.337 "nvme_iov_md": false 00:30:15.337 }, 00:30:15.337 "driver_specific": { 00:30:15.337 "lvol": { 00:30:15.337 "lvol_store_uuid": "85098767-acf8-41ea-baec-34cf03871f51", 00:30:15.337 "base_bdev": "basen1", 00:30:15.337 "thin_provision": true, 00:30:15.337 "num_allocated_clusters": 0, 00:30:15.337 "snapshot": false, 00:30:15.337 "clone": false, 00:30:15.337 "esnap_clone": false 00:30:15.337 } 00:30:15.337 } 00:30:15.337 } 00:30:15.337 ]' 00:30:15.337 19:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:15.337 19:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:15.337 19:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:15.337 19:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:15.337 19:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:15.337 19:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:15.337 19:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:15.337 19:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:15.337 19:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:15.904 19:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:15.904 19:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:15.904 19:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:15.904 19:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:15.904 19:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:15.904 19:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 6dadf4db-3eed-4d51-b859-155e69013d0e -c cachen1p0 --l2p_dram_limit 2 00:30:16.471 [2024-11-26 19:14:47.430515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.471 [2024-11-26 19:14:47.430587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:16.471 [2024-11-26 19:14:47.430612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:16.471 [2024-11-26 19:14:47.430626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.471 [2024-11-26 19:14:47.430718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.471 [2024-11-26 19:14:47.430737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:16.471 [2024-11-26 19:14:47.430754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:30:16.471 [2024-11-26 19:14:47.430766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.471 [2024-11-26 19:14:47.430800] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:16.471 [2024-11-26 
19:14:47.431816] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:16.471 [2024-11-26 19:14:47.431875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.471 [2024-11-26 19:14:47.431914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:16.471 [2024-11-26 19:14:47.431940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.077 ms 00:30:16.471 [2024-11-26 19:14:47.431954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.471 [2024-11-26 19:14:47.432135] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d9f2c854-23dc-4e09-ae2b-41d664d21ee8 00:30:16.471 [2024-11-26 19:14:47.433242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.471 [2024-11-26 19:14:47.433287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:16.471 [2024-11-26 19:14:47.433304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:16.471 [2024-11-26 19:14:47.433318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.471 [2024-11-26 19:14:47.438095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.471 [2024-11-26 19:14:47.438187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:16.471 [2024-11-26 19:14:47.438208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.713 ms 00:30:16.471 [2024-11-26 19:14:47.438223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.471 [2024-11-26 19:14:47.438310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.471 [2024-11-26 19:14:47.438333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:16.471 [2024-11-26 19:14:47.438348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:16.471 [2024-11-26 19:14:47.438364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.471 [2024-11-26 19:14:47.438460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.471 [2024-11-26 19:14:47.438483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:16.471 [2024-11-26 19:14:47.438500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:16.471 [2024-11-26 19:14:47.438519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.471 [2024-11-26 19:14:47.438564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:16.471 [2024-11-26 19:14:47.443221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.471 [2024-11-26 19:14:47.443273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:16.471 [2024-11-26 19:14:47.443295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.661 ms 00:30:16.471 [2024-11-26 19:14:47.443308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.471 [2024-11-26 19:14:47.443362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.471 [2024-11-26 19:14:47.443378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:16.471 [2024-11-26 19:14:47.443393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:16.471 [2024-11-26 19:14:47.443405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:16.471 [2024-11-26 19:14:47.443468] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:16.471 [2024-11-26 19:14:47.443633] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:16.471 [2024-11-26 19:14:47.443667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:16.471 [2024-11-26 19:14:47.443685] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:16.471 [2024-11-26 19:14:47.443702] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:16.471 [2024-11-26 19:14:47.443717] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:16.471 [2024-11-26 19:14:47.443732] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:16.471 [2024-11-26 19:14:47.443746] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:16.471 [2024-11-26 19:14:47.443759] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:16.471 [2024-11-26 19:14:47.443771] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:16.471 [2024-11-26 19:14:47.443786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.471 [2024-11-26 19:14:47.443798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:16.471 [2024-11-26 19:14:47.443813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.322 ms 00:30:16.471 [2024-11-26 19:14:47.443825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.471 [2024-11-26 19:14:47.443963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.471 [2024-11-26 19:14:47.443998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:16.471 [2024-11-26 19:14:47.444016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.098 ms 00:30:16.471 [2024-11-26 19:14:47.444029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.471 [2024-11-26 19:14:47.444157] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:16.471 [2024-11-26 19:14:47.444201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:16.471 [2024-11-26 19:14:47.444219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:16.471 [2024-11-26 19:14:47.444232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.471 [2024-11-26 19:14:47.444247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:16.472 [2024-11-26 19:14:47.444259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:16.472 [2024-11-26 19:14:47.444273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:16.472 [2024-11-26 19:14:47.444285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:16.472 [2024-11-26 19:14:47.444299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:16.472 [2024-11-26 19:14:47.444310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.472 [2024-11-26 19:14:47.444323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:16.472 [2024-11-26 19:14:47.444335] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:16.472 [2024-11-26 19:14:47.444348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.472 [2024-11-26 19:14:47.444359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:16.472 [2024-11-26 19:14:47.444373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:16.472 [2024-11-26 19:14:47.444384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.472 [2024-11-26 19:14:47.444402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:16.472 [2024-11-26 19:14:47.444414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:16.472 [2024-11-26 19:14:47.444437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.472 [2024-11-26 19:14:47.444448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:16.472 [2024-11-26 19:14:47.444462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:16.472 [2024-11-26 19:14:47.444473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.472 [2024-11-26 19:14:47.444487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:16.472 [2024-11-26 19:14:47.444498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:16.472 [2024-11-26 19:14:47.444512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.472 [2024-11-26 19:14:47.444523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:16.472 [2024-11-26 19:14:47.444537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:16.472 [2024-11-26 19:14:47.444548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.472 [2024-11-26 19:14:47.444561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:16.472 [2024-11-26 19:14:47.444573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:16.472 [2024-11-26 19:14:47.444586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.472 [2024-11-26 19:14:47.444599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:16.472 [2024-11-26 19:14:47.444616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:16.472 [2024-11-26 19:14:47.444627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.472 [2024-11-26 19:14:47.444641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:16.472 [2024-11-26 19:14:47.444652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:16.472 [2024-11-26 19:14:47.444665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.472 [2024-11-26 19:14:47.444677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:16.472 [2024-11-26 19:14:47.444691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:16.472 [2024-11-26 19:14:47.444702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.472 [2024-11-26 19:14:47.444716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:16.472 [2024-11-26 19:14:47.444727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:16.472 [2024-11-26 19:14:47.444740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.472 [2024-11-26 19:14:47.444752] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:16.472 [2024-11-26 19:14:47.444768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:16.472 [2024-11-26 19:14:47.444780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:16.472 [2024-11-26 19:14:47.444794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.472 [2024-11-26 19:14:47.444807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:16.472 [2024-11-26 19:14:47.444823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:16.472 [2024-11-26 19:14:47.444834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:16.472 [2024-11-26 19:14:47.444848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:16.472 [2024-11-26 19:14:47.444860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:16.472 [2024-11-26 19:14:47.444874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:16.472 [2024-11-26 19:14:47.444891] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:16.472 [2024-11-26 19:14:47.444911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.472 [2024-11-26 19:14:47.444925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:16.472 [2024-11-26 19:14:47.444939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:16.472 [2024-11-26 19:14:47.444951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:16.472 [2024-11-26 19:14:47.444966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:16.472 [2024-11-26 19:14:47.444978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:16.472 [2024-11-26 19:14:47.444992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:16.472 [2024-11-26 19:14:47.445004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:16.472 [2024-11-26 19:14:47.445018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:16.472 [2024-11-26 19:14:47.445030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:16.472 [2024-11-26 19:14:47.445046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:16.472 [2024-11-26 19:14:47.445059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:16.472 [2024-11-26 19:14:47.445074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:16.472 [2024-11-26 19:14:47.445086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:16.472 [2024-11-26 19:14:47.445101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:16.472 [2024-11-26 19:14:47.445113] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:16.472 [2024-11-26 19:14:47.445129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.472 [2024-11-26 19:14:47.445142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:16.472 [2024-11-26 19:14:47.445156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:16.472 [2024-11-26 19:14:47.445186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:16.472 [2024-11-26 19:14:47.445204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:16.472 [2024-11-26 19:14:47.445218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.472 [2024-11-26 19:14:47.445231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:16.472 [2024-11-26 19:14:47.445245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.136 ms 00:30:16.473 [2024-11-26 19:14:47.445259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.473 [2024-11-26 19:14:47.445316] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
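The device stack assembled in the trace above can be reproduced by hand with the same RPCs the harness drives through ftl/common.sh. Below is a minimal sketch, assuming a running SPDK target that already exposes a basen1 bdev and an NVMe cache controller at 0000:00:10.0; the names and sizes are this run's values, and UUIDs are per-run:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Wipe any lvstores left over from a previous run (ftl/common.sh@28-30 above)
for lvs in $($RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    $RPC bdev_lvol_delete_lvstore -u "$lvs"
done

# Base device: lvstore on basen1, then a thin-provisioned 20 GiB lvol
lvs=$($RPC bdev_lvol_create_lvstore basen1 lvs)
base=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs")

# Cache device: attach the PCIe controller, split off one 5 GiB partition
$RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
$RPC bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0

# FTL bdev on top of both; the fresh NV cache gets scrubbed during startup,
# hence the extended 60 s RPC timeout used in the trace
$RPC -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2
```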
00:30:16.473 [2024-11-26 19:14:47.445344] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:18.392 [2024-11-26 19:14:49.380999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.392 [2024-11-26 19:14:49.381111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:18.392 [2024-11-26 19:14:49.381136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1935.690 ms 00:30:18.392 [2024-11-26 19:14:49.381152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.392 [2024-11-26 19:14:49.414458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.392 [2024-11-26 19:14:49.414541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:18.392 [2024-11-26 19:14:49.414563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.998 ms 00:30:18.392 [2024-11-26 19:14:49.414579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.392 [2024-11-26 19:14:49.414723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.392 [2024-11-26 19:14:49.414748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:18.392 [2024-11-26 19:14:49.414763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:18.392 [2024-11-26 19:14:49.414783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.392 [2024-11-26 19:14:49.456416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.392 [2024-11-26 19:14:49.456493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:18.392 [2024-11-26 19:14:49.456515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.572 ms 00:30:18.392 [2024-11-26 19:14:49.456530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.392 [2024-11-26 19:14:49.456605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.392 [2024-11-26 19:14:49.456625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:18.392 [2024-11-26 19:14:49.456638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:18.392 [2024-11-26 19:14:49.456652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.393 [2024-11-26 19:14:49.457082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.393 [2024-11-26 19:14:49.457117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:18.393 [2024-11-26 19:14:49.457146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:30:18.393 [2024-11-26 19:14:49.457162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.393 [2024-11-26 19:14:49.457242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.393 [2024-11-26 19:14:49.457265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:18.393 [2024-11-26 19:14:49.457279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:18.393 [2024-11-26 19:14:49.457295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.393 [2024-11-26 19:14:49.475423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.393 [2024-11-26 19:14:49.475504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:18.393 [2024-11-26 19:14:49.475526] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.096 ms 00:30:18.393 [2024-11-26 19:14:49.475542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.393 [2024-11-26 19:14:49.503495] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:18.393 [2024-11-26 19:14:49.504512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.393 [2024-11-26 19:14:49.504552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:18.393 [2024-11-26 19:14:49.504575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.810 ms 00:30:18.393 [2024-11-26 19:14:49.504588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.393 [2024-11-26 19:14:49.530669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.393 [2024-11-26 19:14:49.530760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:18.393 [2024-11-26 19:14:49.530785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.986 ms 00:30:18.393 [2024-11-26 19:14:49.530798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.393 [2024-11-26 19:14:49.530987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.393 [2024-11-26 19:14:49.531011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:18.393 [2024-11-26 19:14:49.531032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:30:18.393 [2024-11-26 19:14:49.531044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.393 [2024-11-26 19:14:49.564007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.393 [2024-11-26 19:14:49.564098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:18.393 [2024-11-26 19:14:49.564125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.838 ms 00:30:18.393 [2024-11-26 19:14:49.564138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.393 [2024-11-26 19:14:49.597300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.393 [2024-11-26 19:14:49.597388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:18.393 [2024-11-26 19:14:49.597412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.025 ms 00:30:18.393 [2024-11-26 19:14:49.597425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.393 [2024-11-26 19:14:49.598241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.393 [2024-11-26 19:14:49.598276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:18.393 [2024-11-26 19:14:49.598298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.722 ms 00:30:18.393 [2024-11-26 19:14:49.598311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.651 [2024-11-26 19:14:49.686895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.651 [2024-11-26 19:14:49.686987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:18.651 [2024-11-26 19:14:49.687016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 88.461 ms 00:30:18.651 [2024-11-26 19:14:49.687031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.651 [2024-11-26 19:14:49.721814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:18.651 [2024-11-26 19:14:49.721907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:18.651 [2024-11-26 19:14:49.721934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.593 ms 00:30:18.651 [2024-11-26 19:14:49.721947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.651 [2024-11-26 19:14:49.758268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.651 [2024-11-26 19:14:49.758353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:18.651 [2024-11-26 19:14:49.758378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.228 ms 00:30:18.651 [2024-11-26 19:14:49.758391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.651 [2024-11-26 19:14:49.791416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.651 [2024-11-26 19:14:49.791519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:18.651 [2024-11-26 19:14:49.791547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.940 ms 00:30:18.651 [2024-11-26 19:14:49.791560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.651 [2024-11-26 19:14:49.791647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.651 [2024-11-26 19:14:49.791679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:18.651 [2024-11-26 19:14:49.791711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:18.651 [2024-11-26 19:14:49.791724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.651 [2024-11-26 19:14:49.791888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:18.651 [2024-11-26 19:14:49.791930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:18.651 [2024-11-26 19:14:49.791948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:30:18.651 [2024-11-26 19:14:49.791960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:18.651 [2024-11-26 19:14:49.793063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2362.047 ms, result 0 00:30:18.651 { 00:30:18.651 "name": "ftl", 00:30:18.651 "uuid": "d9f2c854-23dc-4e09-ae2b-41d664d21ee8" 00:30:18.651 } 00:30:18.651 19:14:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:19.219 [2024-11-26 19:14:50.164457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.219 19:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:19.478 19:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:19.736 [2024-11-26 19:14:50.777198] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:19.736 19:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:19.995 [2024-11-26 19:14:51.054929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:19.995 19:14:51 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:20.563 Fill FTL, iteration 1 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83568 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83568 /var/tmp/spdk.tgt.sock 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83568 ']' 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.563 19:14:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:20.563 [2024-11-26 19:14:51.591799] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
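The fill that follows is driven from a second SPDK process, so the FTL bdev is first published over NVMe/TCP. A sketch of the export side as performed above, using this run's NQN, address and port:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport --trtype TCP
$RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1   # -a any host, -m max namespaces
$RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl       # namespace 1 = the FTL bdev
$RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
    -t TCP -f ipv4 -s 4420 -a 127.0.0.1
$RPC save_config
```

The throwaway spdk_tgt started right after (pid 83568 here, on the private socket /var/tmp/spdk.tgt.sock) appears to exist only to attach this subsystem over TCP once as ftln1 and to persist the resulting bdev subsystem config into test/ftl/config/ini.json, which the later spdk_dd invocations replay.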
00:30:20.563 [2024-11-26 19:14:51.592459] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83568 ] 00:30:20.563 [2024-11-26 19:14:51.766564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.821 [2024-11-26 19:14:51.910006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.783 19:14:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.783 19:14:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:21.783 19:14:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:22.046 ftln1 00:30:22.046 19:14:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:22.046 19:14:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83568 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83568 ']' 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83568 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83568 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:22.304 killing process with pid 83568 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83568' 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83568 00:30:22.304 19:14:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83568 00:30:24.836 19:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:24.836 19:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:24.836 [2024-11-26 19:14:55.721994] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
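Each fill pass is a single spdk_dd run, pinned to core 1 and bootstrapped from that cached JSON config. The command launched above, with the geometry spelled out: 1048576-byte blocks × 1024 blocks = 1073741824 bytes (the $size set earlier, i.e. 1 GiB) per iteration, with --qd=2 keeping two writes in flight:

```bash
# Fill pass: write 1 GiB of random data at a 1 MiB block size, offset by
# --seek blocks into ftln1 (0 for iteration 1, 1024 for iteration 2).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --if=/dev/urandom --ob=ftln1 \
    --bs=1048576 --count=1024 --qd=2 --seek=0
```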
00:30:24.836 [2024-11-26 19:14:55.722146] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83625 ] 00:30:24.836 [2024-11-26 19:14:55.896012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.836 [2024-11-26 19:14:56.002077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.211  [2024-11-26T19:14:58.801Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-26T19:14:59.734Z] Copying: 389/1024 [MB] (203 MBps) [2024-11-26T19:15:00.667Z] Copying: 593/1024 [MB] (204 MBps) [2024-11-26T19:15:01.600Z] Copying: 793/1024 [MB] (200 MBps) [2024-11-26T19:15:01.600Z] Copying: 1003/1024 [MB] (210 MBps) [2024-11-26T19:15:02.975Z] Copying: 1024/1024 [MB] (average 200 MBps) 00:30:31.760 00:30:31.760 Calculate MD5 checksum, iteration 1 00:30:31.760 19:15:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:31.760 19:15:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:31.760 19:15:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:31.760 19:15:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:31.760 19:15:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:31.760 19:15:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:31.760 19:15:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:31.760 19:15:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:31.760 [2024-11-26 19:15:02.657391] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
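The checksum pass is the mirror image of the fill: the same 1 GiB window is read back over the same TCP path into a scratch file, whose MD5 becomes the reference for this iteration. A sketch matching the invocation traced above:

```bash
# Readback: --ib/--skip instead of --ob/--seek, landing in a plain file.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
    --bs=1048576 --count=1024 --qd=2 --skip=0
# Record the checksum of what came back
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '
```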
00:30:31.760 [2024-11-26 19:15:02.657552] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83695 ] 00:30:31.760 [2024-11-26 19:15:02.831009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.760 [2024-11-26 19:15:02.935084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.662  [2024-11-26T19:15:05.444Z] Copying: 470/1024 [MB] (470 MBps) [2024-11-26T19:15:05.702Z] Copying: 924/1024 [MB] (454 MBps) [2024-11-26T19:15:06.695Z] Copying: 1024/1024 [MB] (average 459 MBps) 00:30:35.480 00:30:35.480 19:15:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:35.480 19:15:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:38.027 19:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:38.027 Fill FTL, iteration 2 00:30:38.027 19:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=521be64f67e94be588593da7abfa51cb 00:30:38.027 19:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:38.027 19:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:38.027 19:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:38.027 19:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:38.027 19:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:38.027 19:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:38.027 19:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:38.027 19:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:38.027 19:15:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:38.027 [2024-11-26 19:15:08.788238] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
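Iteration 2 repeats the pattern one window further in. The offset bookkeeping is simple but worth making explicit, since the same windows must be replayed for verification later; a sketch of the loop (reconstructed from the traced variable values, not the script verbatim — tcp_dd is the ftl/common.sh wrapper shown above):

```bash
bs=1048576 count=1024 iterations=2 qd=2
seek=0 skip=0
for ((i = 0; i < iterations; i++)); do
    # fill: write offset steps 0 -> 1024 -> 2048 (in 1 MiB blocks)
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$((seek + count))
    # checksum: read offset follows the same progression
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
           --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$((skip + count))
done
# Two passes touch 2 GiB of the 20 GiB base device.
```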
00:30:38.027 [2024-11-26 19:15:08.788439] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83758 ] 00:30:38.027 [2024-11-26 19:15:08.979728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.027 [2024-11-26 19:15:09.108598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.402  [2024-11-26T19:15:11.992Z] Copying: 199/1024 [MB] (199 MBps) [2024-11-26T19:15:12.558Z] Copying: 394/1024 [MB] (195 MBps) [2024-11-26T19:15:13.931Z] Copying: 598/1024 [MB] (204 MBps) [2024-11-26T19:15:14.866Z] Copying: 799/1024 [MB] (201 MBps) [2024-11-26T19:15:14.866Z] Copying: 999/1024 [MB] (200 MBps) [2024-11-26T19:15:15.801Z] Copying: 1024/1024 [MB] (average 199 MBps) 00:30:44.586 00:30:44.586 Calculate MD5 checksum, iteration 2 00:30:44.586 19:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:44.586 19:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:44.586 19:15:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:44.586 19:15:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:44.586 19:15:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:44.586 19:15:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:44.586 19:15:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:44.586 19:15:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:44.844 [2024-11-26 19:15:15.817874] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
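Once the second readback lands, each iteration's checksum is stashed in the sums array seeded earlier (sums[0] is 521be64f67e94be588593da7abfa51cb above; the value computed next becomes sums[1]), so the same windows can be re-verified later in the test. The capture as traced is just:

```bash
# Per-iteration bookkeeping: keep only the digest field of md5sum's output
sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
(( i++ ))
```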
00:30:44.844 [2024-11-26 19:15:15.818035] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83822 ] 00:30:44.844 [2024-11-26 19:15:15.993346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.101 [2024-11-26 19:15:16.097709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.034  [2024-11-26T19:15:18.838Z] Copying: 465/1024 [MB] (465 MBps) [2024-11-26T19:15:19.096Z] Copying: 897/1024 [MB] (432 MBps) [2024-11-26T19:15:20.470Z] Copying: 1024/1024 [MB] (average 441 MBps) 00:30:49.255 00:30:49.255 19:15:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:49.255 19:15:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:51.786 19:15:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:51.786 19:15:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=219bb608bbee4c9e88cfe815e53572b9 00:30:51.786 19:15:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:51.786 19:15:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:51.786 19:15:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:51.786 [2024-11-26 19:15:22.757983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.786 [2024-11-26 19:15:22.758061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:51.786 [2024-11-26 19:15:22.758087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:51.786 [2024-11-26 19:15:22.758101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.786 [2024-11-26 19:15:22.758147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.786 [2024-11-26 19:15:22.758191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:51.786 [2024-11-26 19:15:22.758207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:51.786 [2024-11-26 19:15:22.758220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.786 [2024-11-26 19:15:22.758252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:51.786 [2024-11-26 19:15:22.758276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:51.786 [2024-11-26 19:15:22.758289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:51.786 [2024-11-26 19:15:22.758300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:51.786 [2024-11-26 19:15:22.758387] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.427 ms, result 0 00:30:51.786 true 00:30:51.786 19:15:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:52.045 { 00:30:52.045 "name": "ftl", 00:30:52.045 "properties": [ 00:30:52.045 { 00:30:52.045 "name": "superblock_version", 00:30:52.045 "value": 5, 00:30:52.045 "read-only": true 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "name": "base_device", 00:30:52.045 "bands": [ 00:30:52.045 { 00:30:52.045 "id": 
0, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 1, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 2, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 3, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 4, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 5, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 6, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 7, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 8, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 9, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 10, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 11, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 12, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 13, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 14, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 15, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 16, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 17, 00:30:52.045 "state": "FREE", 00:30:52.045 "validity": 0.0 00:30:52.045 } 00:30:52.045 ], 00:30:52.045 "read-only": true 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "name": "cache_device", 00:30:52.045 "type": "bdev", 00:30:52.045 "chunks": [ 00:30:52.045 { 00:30:52.045 "id": 0, 00:30:52.045 "state": "INACTIVE", 00:30:52.045 "utilization": 0.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 1, 00:30:52.045 "state": "CLOSED", 00:30:52.045 "utilization": 1.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 2, 00:30:52.045 "state": "CLOSED", 00:30:52.045 "utilization": 1.0 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 3, 00:30:52.045 "state": "OPEN", 00:30:52.045 "utilization": 0.001953125 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "id": 4, 00:30:52.045 "state": "OPEN", 00:30:52.045 "utilization": 0.0 00:30:52.045 } 00:30:52.045 ], 00:30:52.045 "read-only": true 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "name": "verbose_mode", 00:30:52.045 "value": true, 00:30:52.045 "unit": "", 00:30:52.045 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "name": "prep_upgrade_on_shutdown", 00:30:52.045 "value": false, 00:30:52.045 "unit": "", 00:30:52.045 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:52.045 } 00:30:52.045 ] 00:30:52.045 } 00:30:52.045 19:15:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:52.304 [2024-11-26 19:15:23.374692] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.304 [2024-11-26 19:15:23.374769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:52.304 [2024-11-26 19:15:23.374791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:52.305 [2024-11-26 19:15:23.374802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.305 [2024-11-26 19:15:23.374839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.305 [2024-11-26 19:15:23.374856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:52.305 [2024-11-26 19:15:23.374869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:52.305 [2024-11-26 19:15:23.374880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.305 [2024-11-26 19:15:23.374909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.305 [2024-11-26 19:15:23.374923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:52.305 [2024-11-26 19:15:23.374936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:52.305 [2024-11-26 19:15:23.374947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.305 [2024-11-26 19:15:23.375025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.324 ms, result 0 00:30:52.305 true 00:30:52.305 19:15:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:52.305 19:15:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:52.305 19:15:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:52.563 19:15:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:52.563 19:15:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:52.563 19:15:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:53.130 [2024-11-26 19:15:24.043566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.130 [2024-11-26 19:15:24.043636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:53.130 [2024-11-26 19:15:24.043657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:53.130 [2024-11-26 19:15:24.043669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.130 [2024-11-26 19:15:24.043709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.130 [2024-11-26 19:15:24.043726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:53.130 [2024-11-26 19:15:24.043739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:53.130 [2024-11-26 19:15:24.043751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.130 [2024-11-26 19:15:24.043791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.130 [2024-11-26 19:15:24.043815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:53.130 [2024-11-26 19:15:24.043836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:53.130 [2024-11-26 
19:15:24.043857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.130 [2024-11-26 19:15:24.043965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.368 ms, result 0 00:30:53.130 true 00:30:53.130 19:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:53.388 { 00:30:53.388 "name": "ftl", 00:30:53.388 "properties": [ 00:30:53.388 { 00:30:53.388 "name": "superblock_version", 00:30:53.388 "value": 5, 00:30:53.388 "read-only": true 00:30:53.388 }, 00:30:53.388 { 00:30:53.388 "name": "base_device", 00:30:53.388 "bands": [ 00:30:53.388 { 00:30:53.388 "id": 0, 00:30:53.388 "state": "FREE", 00:30:53.388 "validity": 0.0 00:30:53.388 }, 00:30:53.388 { 00:30:53.388 "id": 1, 00:30:53.388 "state": "FREE", 00:30:53.388 "validity": 0.0 00:30:53.388 }, 00:30:53.388 { 00:30:53.388 "id": 2, 00:30:53.388 "state": "FREE", 00:30:53.388 "validity": 0.0 00:30:53.388 }, 00:30:53.388 { 00:30:53.388 "id": 3, 00:30:53.388 "state": "FREE", 00:30:53.388 "validity": 0.0 00:30:53.388 }, 00:30:53.388 { 00:30:53.388 "id": 4, 00:30:53.388 "state": "FREE", 00:30:53.388 "validity": 0.0 00:30:53.388 }, 00:30:53.388 { 00:30:53.388 "id": 5, 00:30:53.388 "state": "FREE", 00:30:53.388 "validity": 0.0 00:30:53.388 }, 00:30:53.388 { 00:30:53.388 "id": 6, 00:30:53.388 "state": "FREE", 00:30:53.388 "validity": 0.0 00:30:53.388 }, 00:30:53.389 { 00:30:53.389 "id": 7, 00:30:53.389 "state": "FREE", 00:30:53.389 "validity": 0.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 8, 00:30:53.389 "state": "FREE", 00:30:53.389 "validity": 0.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 9, 00:30:53.389 "state": "FREE", 00:30:53.389 "validity": 0.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 10, 00:30:53.389 "state": "FREE", 00:30:53.389 "validity": 0.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 11, 00:30:53.389 "state": "FREE", 00:30:53.389 "validity": 0.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 12, 00:30:53.389 "state": "FREE", 00:30:53.389 "validity": 0.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 13, 00:30:53.389 "state": "FREE", 00:30:53.389 "validity": 0.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 14, 00:30:53.389 "state": "FREE", 00:30:53.389 "validity": 0.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 15, 00:30:53.389 "state": "FREE", 00:30:53.389 "validity": 0.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 16, 00:30:53.389 "state": "FREE", 00:30:53.389 "validity": 0.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 17, 00:30:53.389 "state": "FREE", 00:30:53.389 "validity": 0.0 00:30:53.389 } 00:30:53.389 ], 00:30:53.389 "read-only": true 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "name": "cache_device", 00:30:53.389 "type": "bdev", 00:30:53.389 "chunks": [ 00:30:53.389 { 00:30:53.389 "id": 0, 00:30:53.389 "state": "INACTIVE", 00:30:53.389 "utilization": 0.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 1, 00:30:53.389 "state": "CLOSED", 00:30:53.389 "utilization": 1.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 2, 00:30:53.389 "state": "CLOSED", 00:30:53.389 "utilization": 1.0 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 3, 00:30:53.389 "state": "OPEN", 00:30:53.389 "utilization": 0.001953125 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "id": 4, 00:30:53.389 "state": "OPEN", 00:30:53.389 "utilization": 0.0 00:30:53.389 } 00:30:53.389 ], 00:30:53.389 "read-only": true 00:30:53.389 
}, 00:30:53.389 { 00:30:53.389 "name": "verbose_mode", 00:30:53.389 "value": true, 00:30:53.389 "unit": "", 00:30:53.389 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:53.389 }, 00:30:53.389 { 00:30:53.389 "name": "prep_upgrade_on_shutdown", 00:30:53.389 "value": true, 00:30:53.389 "unit": "", 00:30:53.389 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:53.389 } 00:30:53.389 ] 00:30:53.389 } 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83445 ]] 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83445 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83445 ']' 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83445 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83445 00:30:53.389 killing process with pid 83445 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83445' 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83445 00:30:53.389 19:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83445 00:30:54.324 [2024-11-26 19:15:25.393614] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:54.324 [2024-11-26 19:15:25.410760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.324 [2024-11-26 19:15:25.410842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:54.324 [2024-11-26 19:15:25.410864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:54.324 [2024-11-26 19:15:25.410877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.324 [2024-11-26 19:15:25.410911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:54.324 [2024-11-26 19:15:25.414328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.324 [2024-11-26 19:15:25.414368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:54.324 [2024-11-26 19:15:25.414389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.391 ms 00:30:54.325 [2024-11-26 19:15:25.414408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.305 [2024-11-26 19:15:34.574231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.305 [2024-11-26 19:15:34.574317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:04.305 [2024-11-26 19:15:34.574347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9159.840 ms 00:31:04.305 [2024-11-26 19:15:34.574362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.305 [2024-11-26 
19:15:34.575694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.305 [2024-11-26 19:15:34.575735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:04.305 [2024-11-26 19:15:34.575752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.305 ms 00:31:04.305 [2024-11-26 19:15:34.575764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.305 [2024-11-26 19:15:34.577037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.305 [2024-11-26 19:15:34.577074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:04.305 [2024-11-26 19:15:34.577089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.231 ms 00:31:04.305 [2024-11-26 19:15:34.577110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.305 [2024-11-26 19:15:34.590160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.305 [2024-11-26 19:15:34.590253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:04.305 [2024-11-26 19:15:34.590274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.969 ms 00:31:04.305 [2024-11-26 19:15:34.590286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.305 [2024-11-26 19:15:34.598296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.305 [2024-11-26 19:15:34.598371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:04.305 [2024-11-26 19:15:34.598390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.947 ms 00:31:04.305 [2024-11-26 19:15:34.598402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.305 [2024-11-26 19:15:34.598557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.305 [2024-11-26 19:15:34.598603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:04.305 [2024-11-26 19:15:34.598617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.102 ms 00:31:04.305 [2024-11-26 19:15:34.598635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.305 [2024-11-26 19:15:34.611562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.305 [2024-11-26 19:15:34.611647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:04.305 [2024-11-26 19:15:34.611668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.896 ms 00:31:04.305 [2024-11-26 19:15:34.611681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.305 [2024-11-26 19:15:34.625115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.305 [2024-11-26 19:15:34.625208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:04.305 [2024-11-26 19:15:34.625228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.366 ms 00:31:04.305 [2024-11-26 19:15:34.625241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.305 [2024-11-26 19:15:34.638355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.305 [2024-11-26 19:15:34.638439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:04.305 [2024-11-26 19:15:34.638459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.042 ms 00:31:04.305 [2024-11-26 19:15:34.638470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:31:04.305 [2024-11-26 19:15:34.651361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.305 [2024-11-26 19:15:34.651444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:04.305 [2024-11-26 19:15:34.651464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.775 ms 00:31:04.305 [2024-11-26 19:15:34.651477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.305 [2024-11-26 19:15:34.651535] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:04.305 [2024-11-26 19:15:34.651598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:04.305 [2024-11-26 19:15:34.651614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:04.305 [2024-11-26 19:15:34.651630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:04.305 [2024-11-26 19:15:34.651642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:04.306 [2024-11-26 19:15:34.651825] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:04.306 [2024-11-26 19:15:34.651837] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d9f2c854-23dc-4e09-ae2b-41d664d21ee8 00:31:04.306 [2024-11-26 19:15:34.651849] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:04.306 [2024-11-26 
19:15:34.651860] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:31:04.306 [2024-11-26 19:15:34.651871] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:04.306 [2024-11-26 19:15:34.651883] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:04.306 [2024-11-26 19:15:34.651901] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:04.306 [2024-11-26 19:15:34.651913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:04.306 [2024-11-26 19:15:34.651940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:04.306 [2024-11-26 19:15:34.651958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:04.306 [2024-11-26 19:15:34.651976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:04.306 [2024-11-26 19:15:34.651995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.306 [2024-11-26 19:15:34.652022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:04.306 [2024-11-26 19:15:34.652035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.461 ms 00:31:04.306 [2024-11-26 19:15:34.652046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.669331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.306 [2024-11-26 19:15:34.669406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:04.306 [2024-11-26 19:15:34.669441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.247 ms 00:31:04.306 [2024-11-26 19:15:34.669468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.669962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.306 [2024-11-26 19:15:34.669989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:04.306 [2024-11-26 19:15:34.670004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.410 ms 00:31:04.306 [2024-11-26 19:15:34.670016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.726012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.726104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:04.306 [2024-11-26 19:15:34.726125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.726138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.726215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.726232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:04.306 [2024-11-26 19:15:34.726256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.726268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.726430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.726452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:04.306 [2024-11-26 19:15:34.726472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.726484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:31:04.306 [2024-11-26 19:15:34.726510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.726524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:04.306 [2024-11-26 19:15:34.726536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.726548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.831967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.832053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:04.306 [2024-11-26 19:15:34.832089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.832102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.919060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.919146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:04.306 [2024-11-26 19:15:34.919167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.919203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.919355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.919375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:04.306 [2024-11-26 19:15:34.919388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.919405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.919468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.919486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:04.306 [2024-11-26 19:15:34.919499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.919510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.919647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.919677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:04.306 [2024-11-26 19:15:34.919691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.919703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.919765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.919782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:04.306 [2024-11-26 19:15:34.919795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.919806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.919878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.919908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:04.306 [2024-11-26 19:15:34.919942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.919962] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.920047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:04.306 [2024-11-26 19:15:34.920074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:04.306 [2024-11-26 19:15:34.920094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:04.306 [2024-11-26 19:15:34.920106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.306 [2024-11-26 19:15:34.920291] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9509.560 ms, result 0 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:07.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84042 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84042 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84042 ']' 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.593 19:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:07.593 [2024-11-26 19:15:38.573792] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
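
The tcp_target_setup helper traced above reduces to two steps: launch spdk_tgt pinned to core 0 with the saved FTL target config, then block until its RPC socket answers. A minimal sketch of that pattern, using the paths and flags from the trace and substituting a simple poll for the real waitforlisten() logic in autotest_common.sh:

    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
    # Pin the target to core 0; all FTL management work runs on reactor_0.
    "$spdk_tgt_bin" '--cpumask=[0]' --config="$tgt_cnfg" &
    spdk_tgt_pid=$!
    # Poll the default RPC socket instead of sleeping for a fixed time.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$spdk_tgt_pid" || exit 1   # give up if the target already died
        sleep 0.5
    done
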
00:31:07.593 [2024-11-26 19:15:38.573949] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84042 ] 00:31:07.593 [2024-11-26 19:15:38.753071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.851 [2024-11-26 19:15:38.856420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.859 [2024-11-26 19:15:39.717029] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:08.859 [2024-11-26 19:15:39.717123] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:08.859 [2024-11-26 19:15:39.875083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.859 [2024-11-26 19:15:39.875211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:08.859 [2024-11-26 19:15:39.875248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:08.859 [2024-11-26 19:15:39.875286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.859 [2024-11-26 19:15:39.875444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.859 [2024-11-26 19:15:39.875477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:08.860 [2024-11-26 19:15:39.875500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:31:08.860 [2024-11-26 19:15:39.875520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.860 [2024-11-26 19:15:39.875576] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:08.860 [2024-11-26 19:15:39.877263] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:08.860 [2024-11-26 19:15:39.877327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.860 [2024-11-26 19:15:39.877351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:08.860 [2024-11-26 19:15:39.877375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.762 ms 00:31:08.860 [2024-11-26 19:15:39.877394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.860 [2024-11-26 19:15:39.878976] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:08.860 [2024-11-26 19:15:39.901751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.860 [2024-11-26 19:15:39.901862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:08.860 [2024-11-26 19:15:39.901887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.773 ms 00:31:08.860 [2024-11-26 19:15:39.901902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.860 [2024-11-26 19:15:39.902088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.860 [2024-11-26 19:15:39.902121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:08.860 [2024-11-26 19:15:39.902138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:31:08.860 [2024-11-26 19:15:39.902152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.860 [2024-11-26 19:15:39.907479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.860 [2024-11-26 
19:15:39.907558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:08.860 [2024-11-26 19:15:39.907580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.130 ms 00:31:08.860 [2024-11-26 19:15:39.907595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.860 [2024-11-26 19:15:39.907763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.860 [2024-11-26 19:15:39.907790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:08.860 [2024-11-26 19:15:39.907807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.105 ms 00:31:08.860 [2024-11-26 19:15:39.907821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.860 [2024-11-26 19:15:39.907919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.860 [2024-11-26 19:15:39.907963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:08.860 [2024-11-26 19:15:39.907986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:08.860 [2024-11-26 19:15:39.907999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.860 [2024-11-26 19:15:39.908047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:08.860 [2024-11-26 19:15:39.913255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.860 [2024-11-26 19:15:39.913310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:08.860 [2024-11-26 19:15:39.913337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.219 ms 00:31:08.860 [2024-11-26 19:15:39.913351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.860 [2024-11-26 19:15:39.913414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.860 [2024-11-26 19:15:39.913434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:08.860 [2024-11-26 19:15:39.913450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:08.860 [2024-11-26 19:15:39.913464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.860 [2024-11-26 19:15:39.913574] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:08.860 [2024-11-26 19:15:39.913656] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:08.860 [2024-11-26 19:15:39.913710] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:08.860 [2024-11-26 19:15:39.913735] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:08.860 [2024-11-26 19:15:39.913882] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:08.860 [2024-11-26 19:15:39.913914] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:08.860 [2024-11-26 19:15:39.913934] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:08.860 [2024-11-26 19:15:39.913952] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:08.860 [2024-11-26 19:15:39.913977] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:08.860 [2024-11-26 19:15:39.913993] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:08.860 [2024-11-26 19:15:39.914006] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:08.860 [2024-11-26 19:15:39.914019] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:08.860 [2024-11-26 19:15:39.914032] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:08.860 [2024-11-26 19:15:39.914047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.860 [2024-11-26 19:15:39.914061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:08.860 [2024-11-26 19:15:39.914075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.489 ms 00:31:08.860 [2024-11-26 19:15:39.914088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.860 [2024-11-26 19:15:39.914232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.860 [2024-11-26 19:15:39.914254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:08.860 [2024-11-26 19:15:39.914275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.105 ms 00:31:08.860 [2024-11-26 19:15:39.914289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.860 [2024-11-26 19:15:39.914433] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:08.860 [2024-11-26 19:15:39.914453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:08.860 [2024-11-26 19:15:39.914469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:08.860 [2024-11-26 19:15:39.914483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.860 [2024-11-26 19:15:39.914498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:08.860 [2024-11-26 19:15:39.914511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:08.860 [2024-11-26 19:15:39.914525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:08.860 [2024-11-26 19:15:39.914539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:08.860 [2024-11-26 19:15:39.914552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:08.860 [2024-11-26 19:15:39.914564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.860 [2024-11-26 19:15:39.914578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:08.860 [2024-11-26 19:15:39.914590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:08.860 [2024-11-26 19:15:39.914603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.860 [2024-11-26 19:15:39.914616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:08.860 [2024-11-26 19:15:39.914629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:08.860 [2024-11-26 19:15:39.914641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.860 [2024-11-26 19:15:39.914654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:08.860 [2024-11-26 19:15:39.914667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:08.860 [2024-11-26 19:15:39.914679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.860 [2024-11-26 19:15:39.914692] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:08.860 [2024-11-26 19:15:39.914705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:08.860 [2024-11-26 19:15:39.914717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.860 [2024-11-26 19:15:39.914730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:08.860 [2024-11-26 19:15:39.914764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:08.860 [2024-11-26 19:15:39.914777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.860 [2024-11-26 19:15:39.914790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:08.860 [2024-11-26 19:15:39.914803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:08.860 [2024-11-26 19:15:39.914815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.860 [2024-11-26 19:15:39.914828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:08.860 [2024-11-26 19:15:39.914841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:08.860 [2024-11-26 19:15:39.914853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.860 [2024-11-26 19:15:39.914866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:08.860 [2024-11-26 19:15:39.914879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:08.860 [2024-11-26 19:15:39.914892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.860 [2024-11-26 19:15:39.914904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:08.860 [2024-11-26 19:15:39.914917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:08.860 [2024-11-26 19:15:39.914930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.860 [2024-11-26 19:15:39.914943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:08.860 [2024-11-26 19:15:39.914957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:08.860 [2024-11-26 19:15:39.914970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.860 [2024-11-26 19:15:39.914983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:08.860 [2024-11-26 19:15:39.914996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:08.860 [2024-11-26 19:15:39.915008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.861 [2024-11-26 19:15:39.915021] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:08.861 [2024-11-26 19:15:39.915035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:08.861 [2024-11-26 19:15:39.915048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:08.861 [2024-11-26 19:15:39.915069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.861 [2024-11-26 19:15:39.915083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:08.861 [2024-11-26 19:15:39.915097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:08.861 [2024-11-26 19:15:39.915109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:08.861 [2024-11-26 19:15:39.915123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:08.861 [2024-11-26 19:15:39.915135] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:08.861 [2024-11-26 19:15:39.915148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:08.861 [2024-11-26 19:15:39.915162] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:08.861 [2024-11-26 19:15:39.915197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:08.861 [2024-11-26 19:15:39.915214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:08.861 [2024-11-26 19:15:39.915228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:08.861 [2024-11-26 19:15:39.915241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:08.861 [2024-11-26 19:15:39.915255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:08.861 [2024-11-26 19:15:39.915268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:08.861 [2024-11-26 19:15:39.915282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:08.861 [2024-11-26 19:15:39.915295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:08.861 [2024-11-26 19:15:39.915309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:08.861 [2024-11-26 19:15:39.915323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:08.861 [2024-11-26 19:15:39.915337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:08.861 [2024-11-26 19:15:39.915350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:08.861 [2024-11-26 19:15:39.915364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:08.861 [2024-11-26 19:15:39.915378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:08.861 [2024-11-26 19:15:39.915392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:08.861 [2024-11-26 19:15:39.915406] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:08.861 [2024-11-26 19:15:39.915428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:08.861 [2024-11-26 19:15:39.915444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:08.861 [2024-11-26 19:15:39.915458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:08.861 [2024-11-26 19:15:39.915471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:08.861 [2024-11-26 19:15:39.915485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:08.861 [2024-11-26 19:15:39.915501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.861 [2024-11-26 19:15:39.915514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:08.861 [2024-11-26 19:15:39.915529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.154 ms 00:31:08.861 [2024-11-26 19:15:39.915542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.861 [2024-11-26 19:15:39.915616] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:08.861 [2024-11-26 19:15:39.915640] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:12.140 [2024-11-26 19:15:42.765867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.765951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:12.140 [2024-11-26 19:15:42.765974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2850.265 ms 00:31:12.140 [2024-11-26 19:15:42.765986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.799008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.799087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:12.140 [2024-11-26 19:15:42.799108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.679 ms 00:31:12.140 [2024-11-26 19:15:42.799121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.799296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.799320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:12.140 [2024-11-26 19:15:42.799333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:12.140 [2024-11-26 19:15:42.799345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.840252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.840327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:12.140 [2024-11-26 19:15:42.840354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.819 ms 00:31:12.140 [2024-11-26 19:15:42.840366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.840458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.840486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:12.140 [2024-11-26 19:15:42.840515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:12.140 [2024-11-26 19:15:42.840537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.841060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.841092] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:12.140 [2024-11-26 19:15:42.841131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.380 ms 00:31:12.140 [2024-11-26 19:15:42.841160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.841290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.841321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:12.140 [2024-11-26 19:15:42.841341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:31:12.140 [2024-11-26 19:15:42.841359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.859809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.860120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:12.140 [2024-11-26 19:15:42.860152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.401 ms 00:31:12.140 [2024-11-26 19:15:42.860165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.887088] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:12.140 [2024-11-26 19:15:42.887196] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:12.140 [2024-11-26 19:15:42.887224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.887237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:12.140 [2024-11-26 19:15:42.887253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.821 ms 00:31:12.140 [2024-11-26 19:15:42.887264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.906211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.906289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:12.140 [2024-11-26 19:15:42.906311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.841 ms 00:31:12.140 [2024-11-26 19:15:42.906323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.922454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.922533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:12.140 [2024-11-26 19:15:42.922555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.020 ms 00:31:12.140 [2024-11-26 19:15:42.922567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.938767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.938870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:12.140 [2024-11-26 19:15:42.938892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.108 ms 00:31:12.140 [2024-11-26 19:15:42.938904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:42.939834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:42.939865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:12.140 [2024-11-26 
19:15:42.939880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.707 ms 00:31:12.140 [2024-11-26 19:15:42.939891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:43.018336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:43.018420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:12.140 [2024-11-26 19:15:43.018443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 78.413 ms 00:31:12.140 [2024-11-26 19:15:43.018456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:43.031570] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:12.140 [2024-11-26 19:15:43.032527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:43.032558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:12.140 [2024-11-26 19:15:43.032578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.974 ms 00:31:12.140 [2024-11-26 19:15:43.032589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:43.032756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:43.032782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:12.140 [2024-11-26 19:15:43.032795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:12.140 [2024-11-26 19:15:43.032807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:43.032893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:43.032912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:12.140 [2024-11-26 19:15:43.032925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:31:12.140 [2024-11-26 19:15:43.032936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:43.032971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:43.032986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:12.140 [2024-11-26 19:15:43.033003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:12.140 [2024-11-26 19:15:43.033014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.140 [2024-11-26 19:15:43.033054] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:12.140 [2024-11-26 19:15:43.033071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.140 [2024-11-26 19:15:43.033082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:12.141 [2024-11-26 19:15:43.033094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:12.141 [2024-11-26 19:15:43.033105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.141 [2024-11-26 19:15:43.065380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.141 [2024-11-26 19:15:43.065700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:12.141 [2024-11-26 19:15:43.065733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.238 ms 00:31:12.141 [2024-11-26 19:15:43.065747] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:12.141 [2024-11-26 19:15:43.065882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:12.141 [2024-11-26 19:15:43.065902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:31:12.141 [2024-11-26 19:15:43.065916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms
00:31:12.141 [2024-11-26 19:15:43.065927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:12.141 [2024-11-26 19:15:43.067371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3191.788 ms, result 0
00:31:12.141 [2024-11-26 19:15:43.082162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:12.141 [2024-11-26 19:15:43.098205] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:31:12.141 [2024-11-26 19:15:43.107359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:31:12.141 19:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:12.141 19:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:31:12.141 19:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:31:12.141 19:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:31:12.141 19:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:31:12.398 [2024-11-26 19:15:43.539671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:12.398 [2024-11-26 19:15:43.539743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:31:12.398 [2024-11-26 19:15:43.539773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms
00:31:12.398 [2024-11-26 19:15:43.539786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:12.398 [2024-11-26 19:15:43.539825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:12.398 [2024-11-26 19:15:43.539841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:31:12.398 [2024-11-26 19:15:43.539854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:31:12.398 [2024-11-26 19:15:43.539866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:12.398 [2024-11-26 19:15:43.539894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:12.398 [2024-11-26 19:15:43.539908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:31:12.398 [2024-11-26 19:15:43.539921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:31:12.398 [2024-11-26 19:15:43.539932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:12.398 [2024-11-26 19:15:43.540029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.353 ms, result 0
00:31:12.398 true
00:31:12.398 19:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:12.656 {
00:31:12.656 "name": "ftl",
00:31:12.656 "properties": [
00:31:12.656 {
00:31:12.656 "name": "superblock_version",
00:31:12.656 "value": 5,
00:31:12.656 "read-only": true
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "name": "base_device",
00:31:12.656 "bands": [
00:31:12.656 {
00:31:12.656 "id": 0,
00:31:12.656 "state": "CLOSED",
00:31:12.656 "validity": 1.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 1,
00:31:12.656 "state": "CLOSED",
00:31:12.656 "validity": 1.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 2,
00:31:12.656 "state": "CLOSED",
00:31:12.656 "validity": 0.007843137254901933
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 3,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 4,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 5,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 6,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 7,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 8,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 9,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 10,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 11,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 12,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 13,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 14,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 15,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 16,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 17,
00:31:12.656 "state": "FREE",
00:31:12.656 "validity": 0.0
00:31:12.656 }
00:31:12.656 ],
00:31:12.656 "read-only": true
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "name": "cache_device",
00:31:12.656 "type": "bdev",
00:31:12.656 "chunks": [
00:31:12.656 {
00:31:12.656 "id": 0,
00:31:12.656 "state": "INACTIVE",
00:31:12.656 "utilization": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 1,
00:31:12.656 "state": "OPEN",
00:31:12.656 "utilization": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 2,
00:31:12.656 "state": "OPEN",
00:31:12.656 "utilization": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 3,
00:31:12.656 "state": "FREE",
00:31:12.656 "utilization": 0.0
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "id": 4,
00:31:12.656 "state": "FREE",
00:31:12.656 "utilization": 0.0
00:31:12.656 }
00:31:12.656 ],
00:31:12.656 "read-only": true
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "name": "verbose_mode",
00:31:12.656 "value": true,
00:31:12.656 "unit": "",
00:31:12.656 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:31:12.656 },
00:31:12.656 {
00:31:12.656 "name": "prep_upgrade_on_shutdown",
00:31:12.656 "value": false,
00:31:12.656 "unit": "",
00:31:12.656 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:31:12.656 }
00:31:12.656 ]
00:31:12.656 }
00:31:12.914 19:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:31:12.914 19:15:43
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:12.914 19:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:13.171 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:13.171 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:13.171 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:13.171 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:13.171 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:13.430 Validate MD5 checksum, iteration 1 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:13.430 19:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:13.430 [2024-11-26 19:15:44.584303] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
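
The two jq filters above implement the post-restart sanity checks: dump the FTL properties as JSON and count the objects in a given state. The used-chunks check at upgrade_shutdown.sh@82, for example, amounts to the sketch below; in this run it must come out 0, since the prepared shutdown drained the NV cache before the target was restarted:

    # Count cache chunks that still hold user data after the restart.
    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") |
             .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && echo "NV cache still holds $used dirty chunk(s)" >&2
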
00:31:13.430 [2024-11-26 19:15:44.584616] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84121 ] 00:31:13.688 [2024-11-26 19:15:44.779894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.946 [2024-11-26 19:15:44.925587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.423  [2024-11-26T19:15:47.573Z] Copying: 481/1024 [MB] (481 MBps) [2024-11-26T19:15:47.831Z] Copying: 924/1024 [MB] (443 MBps) [2024-11-26T19:15:50.360Z] Copying: 1024/1024 [MB] (average 459 MBps) 00:31:19.145 00:31:19.145 19:15:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:19.145 19:15:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=521be64f67e94be588593da7abfa51cb 00:31:21.047 Validate MD5 checksum, iteration 2 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 521be64f67e94be588593da7abfa51cb != \5\2\1\b\e\6\4\f\6\7\e\9\4\b\e\5\8\8\5\9\3\d\a\7\a\b\f\a\5\1\c\b ]] 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:21.047 19:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:21.047 [2024-11-26 19:15:52.130213] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 
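
Each test_validate_checksum iteration above follows the same window pattern: read the next 1024 MiB of ftln1 over NVMe/TCP with spdk_dd, fingerprint the scratch file, and advance --skip so the next pass reads fresh blocks. A condensed sketch, where iterations, INI_JSON and FTL_FILE stand in for values the real script derives from its environment:

    skip=0
    for ((i = 0; i < iterations; i++)); do
        # Pull the next 1 GiB window of ftln1 through the TCP initiator config.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock --json="$INI_JSON" \
            --ib=ftln1 --of="$FTL_FILE" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        skip=$((skip + 1024))
        sum=$(md5sum "$FTL_FILE" | cut -f1 -d' ')
        echo "iteration $((i + 1)): $sum"   # must match the sum recorded pre-shutdown
    done
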
00:31:21.047 [2024-11-26 19:15:52.130593] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84195 ] 00:31:21.305 [2024-11-26 19:15:52.308737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.305 [2024-11-26 19:15:52.433567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.205  [2024-11-26T19:15:55.355Z] Copying: 485/1024 [MB] (485 MBps) [2024-11-26T19:15:55.355Z] Copying: 937/1024 [MB] (452 MBps) [2024-11-26T19:15:56.731Z] Copying: 1024/1024 [MB] (average 465 MBps) 00:31:25.516 00:31:25.516 19:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:25.516 19:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=219bb608bbee4c9e88cfe815e53572b9 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 219bb608bbee4c9e88cfe815e53572b9 != \2\1\9\b\b\6\0\8\b\b\e\e\4\c\9\e\8\8\c\f\e\8\1\5\e\5\3\5\7\2\b\9 ]] 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84042 ]] 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84042 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84275 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84275 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84275 ']' 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
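The tcp_target_shutdown_dirty / tcp_target_setup pair traced above amounts to the sequence sketched below: SIGKILL the target so FTL never runs its shutdown path, then relaunch it from the saved bdev config and wait for the RPC socket. A sketch under stated assumptions: rootdir is an assumed shorthand for /home/vagrant/spdk_repo/spdk, and the helper names come from ftl/common.sh as shown in the trace.

    # Simulate a crash: no clean FTL shutdown, so the superblock stays dirty.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # Relaunch the target from the JSON config captured earlier and block
    # until it listens on /var/tmp/spdk.sock.
    "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' --config="$rootdir/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
    # With "SHM: clean 0" reported when the super block is loaded, the FTL
    # startup that follows takes the recovery path: restoring P2L checkpoints
    # and replaying open chunks, as the trace_step notices below confirm.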
00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.418 19:15:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:27.676 [2024-11-26 19:15:58.705162] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:31:27.676 [2024-11-26 19:15:58.705537] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84275 ] 00:31:27.676 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84042 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:27.676 [2024-11-26 19:15:58.878202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.935 [2024-11-26 19:15:58.982434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.869 [2024-11-26 19:15:59.842387] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:28.869 [2024-11-26 19:15:59.842717] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:28.869 [2024-11-26 19:15:59.991566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.869 [2024-11-26 19:15:59.991850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:28.869 [2024-11-26 19:15:59.991997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:28.869 [2024-11-26 19:15:59.992052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.869 [2024-11-26 19:15:59.992223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.869 [2024-11-26 19:15:59.992286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:28.869 [2024-11-26 19:15:59.992329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:31:28.869 [2024-11-26 19:15:59.992439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.869 [2024-11-26 19:15:59.992597] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:28.869 [2024-11-26 19:15:59.993805] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:28.869 [2024-11-26 19:15:59.993983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.869 [2024-11-26 19:15:59.994097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:28.869 [2024-11-26 19:15:59.994122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.396 ms 00:31:28.869 [2024-11-26 19:15:59.994134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.869 [2024-11-26 19:15:59.994686] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:28.869 [2024-11-26 19:16:00.015876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.869 [2024-11-26 19:16:00.016217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:28.870 [2024-11-26 19:16:00.016250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.185 ms 00:31:28.870 [2024-11-26 19:16:00.016264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.870 [2024-11-26 19:16:00.029005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:28.870 [2024-11-26 19:16:00.029094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:28.870 [2024-11-26 19:16:00.029115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:31:28.870 [2024-11-26 19:16:00.029126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.870 [2024-11-26 19:16:00.029733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.870 [2024-11-26 19:16:00.029763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:28.870 [2024-11-26 19:16:00.029778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.428 ms 00:31:28.870 [2024-11-26 19:16:00.029789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.870 [2024-11-26 19:16:00.029877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.870 [2024-11-26 19:16:00.029898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:28.870 [2024-11-26 19:16:00.029910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:31:28.870 [2024-11-26 19:16:00.029922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.870 [2024-11-26 19:16:00.029966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.870 [2024-11-26 19:16:00.029983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:28.870 [2024-11-26 19:16:00.029995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:28.870 [2024-11-26 19:16:00.030006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.870 [2024-11-26 19:16:00.030043] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:28.870 [2024-11-26 19:16:00.034368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.870 [2024-11-26 19:16:00.034420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:28.870 [2024-11-26 19:16:00.034438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.334 ms 00:31:28.870 [2024-11-26 19:16:00.034455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.870 [2024-11-26 19:16:00.034503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.870 [2024-11-26 19:16:00.034519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:28.870 [2024-11-26 19:16:00.034531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:28.870 [2024-11-26 19:16:00.034542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.870 [2024-11-26 19:16:00.034614] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:28.870 [2024-11-26 19:16:00.034648] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:28.870 [2024-11-26 19:16:00.034693] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:28.870 [2024-11-26 19:16:00.034717] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:28.870 [2024-11-26 19:16:00.034832] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:28.870 [2024-11-26 19:16:00.034848] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:28.870 [2024-11-26 19:16:00.034863] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:28.870 [2024-11-26 19:16:00.034878] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:28.870 [2024-11-26 19:16:00.034892] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:28.870 [2024-11-26 19:16:00.034904] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:28.870 [2024-11-26 19:16:00.034915] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:28.870 [2024-11-26 19:16:00.034927] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:28.870 [2024-11-26 19:16:00.034938] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:28.870 [2024-11-26 19:16:00.034954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.870 [2024-11-26 19:16:00.034967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:28.870 [2024-11-26 19:16:00.034978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:31:28.870 [2024-11-26 19:16:00.034989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.870 [2024-11-26 19:16:00.035092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.870 [2024-11-26 19:16:00.035108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:28.870 [2024-11-26 19:16:00.035120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:31:28.870 [2024-11-26 19:16:00.035131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.870 [2024-11-26 19:16:00.035280] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:28.870 [2024-11-26 19:16:00.035307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:28.870 [2024-11-26 19:16:00.035319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:28.870 [2024-11-26 19:16:00.035331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:28.870 [2024-11-26 19:16:00.035354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:28.870 [2024-11-26 19:16:00.035375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:28.870 [2024-11-26 19:16:00.035388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:28.870 [2024-11-26 19:16:00.035398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:28.870 [2024-11-26 19:16:00.035418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:28.870 [2024-11-26 19:16:00.035428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:28.870 [2024-11-26 19:16:00.035450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:28.870 [2024-11-26 19:16:00.035461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:28.870 [2024-11-26 19:16:00.035481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:28.870 [2024-11-26 19:16:00.035491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:28.870 [2024-11-26 19:16:00.035512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:28.870 [2024-11-26 19:16:00.035539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.870 [2024-11-26 19:16:00.035549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:28.870 [2024-11-26 19:16:00.035560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:28.870 [2024-11-26 19:16:00.035570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.870 [2024-11-26 19:16:00.035580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:28.870 [2024-11-26 19:16:00.035590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:28.870 [2024-11-26 19:16:00.035600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.870 [2024-11-26 19:16:00.035610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:28.870 [2024-11-26 19:16:00.035621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:28.870 [2024-11-26 19:16:00.035631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.870 [2024-11-26 19:16:00.035641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:28.870 [2024-11-26 19:16:00.035651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:28.870 [2024-11-26 19:16:00.035661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:28.870 [2024-11-26 19:16:00.035681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:28.870 [2024-11-26 19:16:00.035691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:28.870 [2024-11-26 19:16:00.035712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:28.870 [2024-11-26 19:16:00.035742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:28.870 [2024-11-26 19:16:00.035752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035762] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:28.870 [2024-11-26 19:16:00.035776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:28.870 [2024-11-26 19:16:00.035787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:28.870 [2024-11-26 19:16:00.035800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:28.870 [2024-11-26 19:16:00.035812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:28.870 [2024-11-26 19:16:00.035823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:28.870 [2024-11-26 19:16:00.035833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:28.870 [2024-11-26 19:16:00.035843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:28.870 [2024-11-26 19:16:00.035853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:28.870 [2024-11-26 19:16:00.035864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:28.870 [2024-11-26 19:16:00.035876] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:28.870 [2024-11-26 19:16:00.035890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.870 [2024-11-26 19:16:00.035904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:28.870 [2024-11-26 19:16:00.035915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:28.871 [2024-11-26 19:16:00.035926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:28.871 [2024-11-26 19:16:00.035937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:28.871 [2024-11-26 19:16:00.035962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:28.871 [2024-11-26 19:16:00.035975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:28.871 [2024-11-26 19:16:00.035986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:28.871 [2024-11-26 19:16:00.035997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:28.871 [2024-11-26 19:16:00.036009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:28.871 [2024-11-26 19:16:00.036020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:28.871 [2024-11-26 19:16:00.036031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:28.871 [2024-11-26 19:16:00.036042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:28.871 [2024-11-26 19:16:00.036053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:28.871 [2024-11-26 19:16:00.036065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:28.871 [2024-11-26 19:16:00.036076] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:28.871 [2024-11-26 19:16:00.036088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.871 [2024-11-26 19:16:00.036107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:28.871 [2024-11-26 19:16:00.036120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:28.871 [2024-11-26 19:16:00.036131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:28.871 [2024-11-26 19:16:00.036143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:28.871 [2024-11-26 19:16:00.036155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.871 [2024-11-26 19:16:00.036167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:28.871 [2024-11-26 19:16:00.036579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.947 ms 00:31:28.871 [2024-11-26 19:16:00.036624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.871 [2024-11-26 19:16:00.068587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.871 [2024-11-26 19:16:00.068873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:28.871 [2024-11-26 19:16:00.068998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.836 ms 00:31:28.871 [2024-11-26 19:16:00.069050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.871 [2024-11-26 19:16:00.069239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.871 [2024-11-26 19:16:00.069374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:28.871 [2024-11-26 19:16:00.069488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:28.871 [2024-11-26 19:16:00.069539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.110597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.110853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:29.129 [2024-11-26 19:16:00.110975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.851 ms 00:31:29.129 [2024-11-26 19:16:00.111027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.111220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.111280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:29.129 [2024-11-26 19:16:00.111391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:29.129 [2024-11-26 19:16:00.111450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.111691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.111750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:29.129 [2024-11-26 19:16:00.111792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.092 ms 00:31:29.129 [2024-11-26 19:16:00.111899] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.112026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.112161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:29.129 [2024-11-26 19:16:00.112196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:29.129 [2024-11-26 19:16:00.112219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.130145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.130218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:29.129 [2024-11-26 19:16:00.130239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.885 ms 00:31:29.129 [2024-11-26 19:16:00.130258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.130488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.130521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:29.129 [2024-11-26 19:16:00.130536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:29.129 [2024-11-26 19:16:00.130547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.166631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.166873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:29.129 [2024-11-26 19:16:00.166906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.040 ms 00:31:29.129 [2024-11-26 19:16:00.166920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.180231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.180320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:29.129 [2024-11-26 19:16:00.180340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.711 ms 00:31:29.129 [2024-11-26 19:16:00.180352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.256268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.256374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:29.129 [2024-11-26 19:16:00.256397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 75.789 ms 00:31:29.129 [2024-11-26 19:16:00.256411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.256670] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:29.129 [2024-11-26 19:16:00.256837] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:29.129 [2024-11-26 19:16:00.256986] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:29.129 [2024-11-26 19:16:00.257132] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:29.129 [2024-11-26 19:16:00.257147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.257160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:29.129 [2024-11-26 
19:16:00.257196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.636 ms 00:31:29.129 [2024-11-26 19:16:00.257211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.257357] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:29.129 [2024-11-26 19:16:00.257380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.257398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:29.129 [2024-11-26 19:16:00.257410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:29.129 [2024-11-26 19:16:00.257421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.278191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.278279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:29.129 [2024-11-26 19:16:00.278302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.712 ms 00:31:29.129 [2024-11-26 19:16:00.278314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.290654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.290742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:29.129 [2024-11-26 19:16:00.290762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:29.129 [2024-11-26 19:16:00.290775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.129 [2024-11-26 19:16:00.290949] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:29.129 [2024-11-26 19:16:00.291097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.129 [2024-11-26 19:16:00.291113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:29.129 [2024-11-26 19:16:00.291127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.151 ms 00:31:29.129 [2024-11-26 19:16:00.291138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.063 [2024-11-26 19:16:00.912490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.063 [2024-11-26 19:16:00.912581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:30.063 [2024-11-26 19:16:00.912613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 620.027 ms 00:31:30.063 [2024-11-26 19:16:00.912638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.063 [2024-11-26 19:16:00.918067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.063 [2024-11-26 19:16:00.918140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:30.063 [2024-11-26 19:16:00.918192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.160 ms 00:31:30.063 [2024-11-26 19:16:00.918227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.063 [2024-11-26 19:16:00.918892] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:30.063 [2024-11-26 19:16:00.918953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.063 [2024-11-26 19:16:00.918982] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:30.063 [2024-11-26 19:16:00.919007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.653 ms 00:31:30.063 [2024-11-26 19:16:00.919030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.063 [2024-11-26 19:16:00.919100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.063 [2024-11-26 19:16:00.919128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:30.063 [2024-11-26 19:16:00.919149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:30.063 [2024-11-26 19:16:00.919196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.063 [2024-11-26 19:16:00.919285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 628.329 ms, result 0 00:31:30.063 [2024-11-26 19:16:00.919370] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:30.063 [2024-11-26 19:16:00.919481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.063 [2024-11-26 19:16:00.919506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:30.063 [2024-11-26 19:16:00.919526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.114 ms 00:31:30.063 [2024-11-26 19:16:00.919545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.322 [2024-11-26 19:16:01.460224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.322 [2024-11-26 19:16:01.460524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:30.322 [2024-11-26 19:16:01.460580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 539.116 ms 00:31:30.322 [2024-11-26 19:16:01.460594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.322 [2024-11-26 19:16:01.465442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.322 [2024-11-26 19:16:01.465503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:30.322 [2024-11-26 19:16:01.465522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.883 ms 00:31:30.322 [2024-11-26 19:16:01.465534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.322 [2024-11-26 19:16:01.465853] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:30.322 [2024-11-26 19:16:01.465880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.322 [2024-11-26 19:16:01.465893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:30.322 [2024-11-26 19:16:01.465905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.313 ms 00:31:30.322 [2024-11-26 19:16:01.465916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.322 [2024-11-26 19:16:01.465960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.322 [2024-11-26 19:16:01.465978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:30.322 [2024-11-26 19:16:01.465991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:30.322 [2024-11-26 19:16:01.466001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.322 [2024-11-26 
19:16:01.466054] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 546.688 ms, result 0 00:31:30.322 [2024-11-26 19:16:01.466111] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:30.322 [2024-11-26 19:16:01.466128] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:30.322 [2024-11-26 19:16:01.466141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.322 [2024-11-26 19:16:01.466152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:30.322 [2024-11-26 19:16:01.466165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1175.234 ms 00:31:30.322 [2024-11-26 19:16:01.466201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.323 [2024-11-26 19:16:01.466246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.323 [2024-11-26 19:16:01.466269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:30.323 [2024-11-26 19:16:01.466282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:30.323 [2024-11-26 19:16:01.466292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.323 [2024-11-26 19:16:01.480709] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:30.323 [2024-11-26 19:16:01.481024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.323 [2024-11-26 19:16:01.481053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:30.323 [2024-11-26 19:16:01.481076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.707 ms 00:31:30.323 [2024-11-26 19:16:01.481092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.323 [2024-11-26 19:16:01.482211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.323 [2024-11-26 19:16:01.482427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:30.323 [2024-11-26 19:16:01.482464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.888 ms 00:31:30.323 [2024-11-26 19:16:01.482482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.323 [2024-11-26 19:16:01.485571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.323 [2024-11-26 19:16:01.485752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:30.323 [2024-11-26 19:16:01.485789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.037 ms 00:31:30.323 [2024-11-26 19:16:01.485807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.323 [2024-11-26 19:16:01.485939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.323 [2024-11-26 19:16:01.485967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:30.323 [2024-11-26 19:16:01.485997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:30.323 [2024-11-26 19:16:01.486014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.323 [2024-11-26 19:16:01.486216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.323 [2024-11-26 19:16:01.486242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:30.323 
[2024-11-26 19:16:01.486261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:30.323 [2024-11-26 19:16:01.486278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.323 [2024-11-26 19:16:01.486329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.323 [2024-11-26 19:16:01.486351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:30.323 [2024-11-26 19:16:01.486370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:30.323 [2024-11-26 19:16:01.486389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.323 [2024-11-26 19:16:01.486455] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:30.323 [2024-11-26 19:16:01.486480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.323 [2024-11-26 19:16:01.486498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:30.323 [2024-11-26 19:16:01.486516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:30.323 [2024-11-26 19:16:01.486532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.323 [2024-11-26 19:16:01.486620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.323 [2024-11-26 19:16:01.486644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:30.323 [2024-11-26 19:16:01.486661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:31:30.323 [2024-11-26 19:16:01.486676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.323 [2024-11-26 19:16:01.488222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1495.907 ms, result 0 00:31:30.323 [2024-11-26 19:16:01.502348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.323 [2024-11-26 19:16:01.518585] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:30.323 [2024-11-26 19:16:01.530291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:30.581 Validate MD5 checksum, iteration 1 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:30.581 19:16:01 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:30.581 19:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:30.581 [2024-11-26 19:16:01.653238] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:31:30.582 [2024-11-26 19:16:01.653599] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84312 ] 00:31:30.840 [2024-11-26 19:16:01.827962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.840 [2024-11-26 19:16:01.932207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.758  [2024-11-26T19:16:04.907Z] Copying: 431/1024 [MB] (431 MBps) [2024-11-26T19:16:05.165Z] Copying: 859/1024 [MB] (428 MBps) [2024-11-26T19:16:07.064Z] Copying: 1024/1024 [MB] (average 422 MBps) 00:31:35.849 00:31:35.849 19:16:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:35.849 19:16:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:37.753 Validate MD5 checksum, iteration 2 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=521be64f67e94be588593da7abfa51cb 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 521be64f67e94be588593da7abfa51cb != \5\2\1\b\e\6\4\f\6\7\e\9\4\b\e\5\8\8\5\9\3\d\a\7\a\b\f\a\5\1\c\b ]] 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:37.753 19:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:37.753 
[2024-11-26 19:16:08.967403] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization... 00:31:38.012 [2024-11-26 19:16:08.967928] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84393 ] 00:31:38.012 [2024-11-26 19:16:09.161075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.272 [2024-11-26 19:16:09.284262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.185  [2024-11-26T19:16:11.970Z] Copying: 483/1024 [MB] (483 MBps) [2024-11-26T19:16:12.229Z] Copying: 906/1024 [MB] (423 MBps) [2024-11-26T19:16:15.540Z] Copying: 1024/1024 [MB] (average 453 MBps) 00:31:44.325 00:31:44.325 19:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:44.325 19:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:46.227 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:46.227 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=219bb608bbee4c9e88cfe815e53572b9 00:31:46.227 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 219bb608bbee4c9e88cfe815e53572b9 != \2\1\9\b\b\6\0\8\b\b\e\e\4\c\9\e\8\8\c\f\e\8\1\5\e\5\3\5\7\2\b\9 ]] 00:31:46.227 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:46.227 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:46.227 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:46.227 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:46.227 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:46.227 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84275 ]] 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84275 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84275 ']' 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84275 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84275 00:31:46.485 killing process with pid 84275 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84275' 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84275 00:31:46.485 19:16:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84275 00:31:47.427 [2024-11-26 19:16:18.475370] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:47.427 [2024-11-26 19:16:18.494715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.427 [2024-11-26 19:16:18.494796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:47.427 [2024-11-26 19:16:18.494818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:47.427 [2024-11-26 19:16:18.494830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.427 [2024-11-26 19:16:18.494865] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:47.427 [2024-11-26 19:16:18.498233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.427 [2024-11-26 19:16:18.498278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:47.427 [2024-11-26 19:16:18.498295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.344 ms 00:31:47.427 [2024-11-26 19:16:18.498306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.427 [2024-11-26 19:16:18.498579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.427 [2024-11-26 19:16:18.498603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:47.428 [2024-11-26 19:16:18.498617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.240 ms 00:31:47.428 [2024-11-26 19:16:18.498628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.428 [2024-11-26 19:16:18.499856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.428 [2024-11-26 19:16:18.499898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:47.428 [2024-11-26 19:16:18.499915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.204 ms 00:31:47.428 [2024-11-26 19:16:18.499935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.428 [2024-11-26 19:16:18.501221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.428 [2024-11-26 19:16:18.501389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:47.428 [2024-11-26 19:16:18.501417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.231 ms 00:31:47.428 [2024-11-26 19:16:18.501429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.428 [2024-11-26 19:16:18.514409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.428 [2024-11-26 19:16:18.514514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:47.428 [2024-11-26 19:16:18.514552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.893 ms 00:31:47.428 [2024-11-26 19:16:18.514564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.428 [2024-11-26 19:16:18.521380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.428 [2024-11-26 19:16:18.521453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:47.428 [2024-11-26 19:16:18.521473] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.748 ms 00:31:47.428 [2024-11-26 19:16:18.521484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.428 [2024-11-26 19:16:18.521616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.428 [2024-11-26 19:16:18.521636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:47.428 [2024-11-26 19:16:18.521650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:31:47.428 [2024-11-26 19:16:18.521674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.428 [2024-11-26 19:16:18.534516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.428 [2024-11-26 19:16:18.534594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:47.428 [2024-11-26 19:16:18.534613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.814 ms 00:31:47.428 [2024-11-26 19:16:18.534624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.428 [2024-11-26 19:16:18.547498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.428 [2024-11-26 19:16:18.547577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:47.428 [2024-11-26 19:16:18.547603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.802 ms 00:31:47.428 [2024-11-26 19:16:18.547619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.428 [2024-11-26 19:16:18.560708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.428 [2024-11-26 19:16:18.560789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:47.428 [2024-11-26 19:16:18.560810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.013 ms 00:31:47.428 [2024-11-26 19:16:18.560821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.428 [2024-11-26 19:16:18.573633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.428 [2024-11-26 19:16:18.573714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:47.428 [2024-11-26 19:16:18.573734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.675 ms 00:31:47.428 [2024-11-26 19:16:18.573746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.428 [2024-11-26 19:16:18.573816] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:47.428 [2024-11-26 19:16:18.573843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:47.428 [2024-11-26 19:16:18.573856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:47.428 [2024-11-26 19:16:18.573869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:47.428 [2024-11-26 19:16:18.573881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.573892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.573903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.573915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 
19:16:18.573926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.573938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.573950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.573961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.573972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.573984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.573995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.574007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.574019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.574030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.574042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:47.428 [2024-11-26 19:16:18.574056] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:47.428 [2024-11-26 19:16:18.574067] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d9f2c854-23dc-4e09-ae2b-41d664d21ee8 00:31:47.428 [2024-11-26 19:16:18.574079] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:47.428 [2024-11-26 19:16:18.574089] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:47.428 [2024-11-26 19:16:18.574099] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:47.428 [2024-11-26 19:16:18.574110] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:47.428 [2024-11-26 19:16:18.574120] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:47.428 [2024-11-26 19:16:18.574132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:47.428 [2024-11-26 19:16:18.574155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:47.428 [2024-11-26 19:16:18.574165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:47.428 [2024-11-26 19:16:18.574197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:47.428 [2024-11-26 19:16:18.574209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.428 [2024-11-26 19:16:18.574220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:47.428 [2024-11-26 19:16:18.574232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.404 ms 00:31:47.428 [2024-11-26 19:16:18.574243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.428 [2024-11-26 19:16:18.591300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.428 [2024-11-26 19:16:18.591595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:47.428 [2024-11-26 19:16:18.591628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
00:31:47.428 [2024-11-26 19:16:18.591641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.428 [2024-11-26 19:16:18.592202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:47.428 [2024-11-26 19:16:18.592222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:31:47.428 [2024-11-26 19:16:18.592236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.462 ms
00:31:47.428 [2024-11-26 19:16:18.592246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.706 [2024-11-26 19:16:18.647606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.706 [2024-11-26 19:16:18.647672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:31:47.706 [2024-11-26 19:16:18.647690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.706 [2024-11-26 19:16:18.647710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.706 [2024-11-26 19:16:18.647772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.706 [2024-11-26 19:16:18.647787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:31:47.706 [2024-11-26 19:16:18.647799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.706 [2024-11-26 19:16:18.647811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.706 [2024-11-26 19:16:18.647945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.706 [2024-11-26 19:16:18.647965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:31:47.706 [2024-11-26 19:16:18.647991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.706 [2024-11-26 19:16:18.648002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.706 [2024-11-26 19:16:18.648034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.706 [2024-11-26 19:16:18.648049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:31:47.706 [2024-11-26 19:16:18.648061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.706 [2024-11-26 19:16:18.648072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.706 [2024-11-26 19:16:18.756241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.706 [2024-11-26 19:16:18.756344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:31:47.706 [2024-11-26 19:16:18.756372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.706 [2024-11-26 19:16:18.756394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.706 [2024-11-26 19:16:18.845281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.706 [2024-11-26 19:16:18.845369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:31:47.706 [2024-11-26 19:16:18.845395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.706 [2024-11-26 19:16:18.845407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.706 [2024-11-26 19:16:18.845551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.706 [2024-11-26 19:16:18.845570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:31:47.706 [2024-11-26 19:16:18.845584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.706 [2024-11-26 19:16:18.845595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.706 [2024-11-26 19:16:18.845656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.706 [2024-11-26 19:16:18.845697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:31:47.706 [2024-11-26 19:16:18.845710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.706 [2024-11-26 19:16:18.845721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.706 [2024-11-26 19:16:18.845881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.706 [2024-11-26 19:16:18.845900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:31:47.706 [2024-11-26 19:16:18.845913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.706 [2024-11-26 19:16:18.845924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.707 [2024-11-26 19:16:18.845974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.707 [2024-11-26 19:16:18.845991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:31:47.707 [2024-11-26 19:16:18.846010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.707 [2024-11-26 19:16:18.846021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.707 [2024-11-26 19:16:18.846069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.707 [2024-11-26 19:16:18.846085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:31:47.707 [2024-11-26 19:16:18.846097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.707 [2024-11-26 19:16:18.846109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.707 [2024-11-26 19:16:18.846163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:47.707 [2024-11-26 19:16:18.846216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:31:47.707 [2024-11-26 19:16:18.846229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:47.707 [2024-11-26 19:16:18.846247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:47.707 [2024-11-26 19:16:18.846399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 351.657 ms, result 0
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:31:49.084 Remove shared memory files
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84042
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:31:49.084 ************************************
00:31:49.084 END TEST ftl_upgrade_shutdown
00:31:49.084 ************************************
00:31:49.084
00:31:49.084 real 1m37.548s
00:31:49.084 user 2m19.966s
00:31:49.084 sys 0m24.183s
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:49.084 19:16:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:49.084 Process with pid 76946 is not found
00:31:49.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:49.084 19:16:20 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:31:49.084 19:16:20 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:31:49.084 19:16:20 ftl -- ftl/ftl.sh@14 -- # killprocess 76946
00:31:49.084 19:16:20 ftl -- common/autotest_common.sh@954 -- # '[' -z 76946 ']'
00:31:49.084 19:16:20 ftl -- common/autotest_common.sh@958 -- # kill -0 76946
00:31:49.084 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76946) - No such process
00:31:49.084 19:16:20 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76946 is not found'
00:31:49.084 19:16:20 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:31:49.084 19:16:20 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84530
00:31:49.084 19:16:20 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84530
00:31:49.084 19:16:20 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:31:49.084 19:16:20 ftl -- common/autotest_common.sh@835 -- # '[' -z 84530 ']'
00:31:49.084 19:16:20 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:49.084 19:16:20 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:49.084 19:16:20 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:49.084 19:16:20 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:49.084 19:16:20 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:49.084 [2024-11-26 19:16:20.178628] Starting SPDK v25.01-pre git sha1 baa2dd0a5 / DPDK 24.03.0 initialization...
00:31:49.084 [2024-11-26 19:16:20.178788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84530 ]
00:31:49.343 [2024-11-26 19:16:20.354198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:49.343 [2024-11-26 19:16:20.456620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:50.277 19:16:21 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:50.277 19:16:21 ftl -- common/autotest_common.sh@868 -- # return 0
00:31:50.277 19:16:21 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:31:50.536 nvme0n1
00:31:50.536 19:16:21 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:31:50.536 19:16:21 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:50.536 19:16:21 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:31:50.794 19:16:21 ftl -- ftl/common.sh@28 -- # stores=85098767-acf8-41ea-baec-34cf03871f51
00:31:50.794 19:16:21 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:31:50.794 19:16:21 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85098767-acf8-41ea-baec-34cf03871f51
00:31:51.360 19:16:22 ftl -- ftl/ftl.sh@23 -- # killprocess 84530
00:31:51.360 19:16:22 ftl -- common/autotest_common.sh@954 -- # '[' -z 84530 ']'
00:31:51.360 19:16:22 ftl -- common/autotest_common.sh@958 -- # kill -0 84530
00:31:51.360 19:16:22 ftl -- common/autotest_common.sh@959 -- # uname
00:31:51.360 19:16:22 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:51.360 19:16:22 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84530
00:31:51.360 killing process with pid 84530
00:31:51.360 19:16:22 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:51.360 19:16:22 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:51.360 19:16:22 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84530'
00:31:51.360 19:16:22 ftl -- common/autotest_common.sh@973 -- # kill 84530
00:31:51.360 19:16:22 ftl -- common/autotest_common.sh@978 -- # wait 84530
00:31:53.262 19:16:24 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:31:53.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:53.520 Waiting for block devices as requested
00:31:53.520 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:31:53.778 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:31:53.778 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:31:53.778 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:31:59.046 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:31:59.046 Remove shared memory files
00:31:59.046 19:16:30 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:31:59.046 19:16:30 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:59.046 19:16:30 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:31:59.046 19:16:30 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:31:59.046 19:16:30 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:31:59.046 19:16:30 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:59.046 19:16:30 ftl -- ftl/common.sh@209 -- # rm -f rm -f
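The teardown traced above combines two patterns that recur throughout this harness: clear_lvols drains leftover lvol stores over the JSON-RPC socket, and killprocess probes the target with kill -0 before sending a real signal. A minimal standalone sketch of the same flow, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock; the pid value is illustrative, taken from this run:

    #!/usr/bin/env bash
    # Sketch of the clear_lvols + killprocess pattern from the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Delete every lvol store the target still knows about.
    for lvs in $("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    done

    # kill -0 sends no signal; it only tests that the pid exists, so a
    # stale pid does not turn cleanup into a spurious failure.
    pid=84530   # illustrative pid from the run above
    if kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null   # succeeds only when $pid is our child, as in the harness
    else
        echo "Process with pid $pid is not found"
    fi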
00:31:59.046 ************************************
00:31:59.046 END TEST ftl
00:31:59.046 ************************************
00:31:59.046
00:31:59.046 real 11m33.219s
00:31:59.046 user 14m41.095s
00:31:59.046 sys 1m37.166s
00:31:59.046 19:16:30 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:59.046 19:16:30 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:59.046 19:16:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:31:59.046 19:16:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:31:59.046 19:16:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:31:59.046 19:16:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:31:59.046 19:16:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:31:59.046 19:16:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:31:59.046 19:16:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:31:59.046 19:16:30 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:31:59.046 19:16:30 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:31:59.046 19:16:30 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:31:59.046 19:16:30 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:59.046 19:16:30 -- common/autotest_common.sh@10 -- # set +x
00:31:59.046 19:16:30 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:31:59.046 19:16:30 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:31:59.046 19:16:30 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:31:59.046 19:16:30 -- common/autotest_common.sh@10 -- # set +x
00:32:00.420 INFO: APP EXITING
00:32:00.420 INFO: killing all VMs
00:32:00.420 INFO: killing vhost app
00:32:00.420 INFO: EXIT DONE
00:32:00.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:00.936 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:32:00.936 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:32:01.194 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:32:01.194 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:32:01.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:02.020 Cleaning
00:32:02.020 Removing: /var/run/dpdk/spdk0/config
00:32:02.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:02.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:02.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:02.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:02.020 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:32:02.020 Removing: /var/run/dpdk/spdk0/hugepage_info
00:32:02.020 Removing: /var/run/dpdk/spdk0
00:32:02.020 Removing: /var/run/dpdk/spdk_pid58044
00:32:02.020 Removing: /var/run/dpdk/spdk_pid58268
00:32:02.020 Removing: /var/run/dpdk/spdk_pid58491
00:32:02.020 Removing: /var/run/dpdk/spdk_pid58601
00:32:02.020 Removing: /var/run/dpdk/spdk_pid58646
00:32:02.020 Removing: /var/run/dpdk/spdk_pid58774
00:32:02.020 Removing: /var/run/dpdk/spdk_pid58792
00:32:02.020 Removing: /var/run/dpdk/spdk_pid59002
00:32:02.020 Removing: /var/run/dpdk/spdk_pid59107
00:32:02.020 Removing: /var/run/dpdk/spdk_pid59214
00:32:02.020 Removing: /var/run/dpdk/spdk_pid59331
00:32:02.020 Removing: /var/run/dpdk/spdk_pid59435
00:32:02.020 Removing: /var/run/dpdk/spdk_pid59480
00:32:02.020 Removing: /var/run/dpdk/spdk_pid59517
00:32:02.020 Removing: /var/run/dpdk/spdk_pid59589
00:32:02.020 Removing: /var/run/dpdk/spdk_pid59681
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60168
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60237
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60311
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60327
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60481
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60497
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60645
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60661
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60736
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60754
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60818
00:32:02.020 Removing: /var/run/dpdk/spdk_pid60836
00:32:02.020 Removing: /var/run/dpdk/spdk_pid61035
00:32:02.020 Removing: /var/run/dpdk/spdk_pid61071
00:32:02.020 Removing: /var/run/dpdk/spdk_pid61155
00:32:02.020 Removing: /var/run/dpdk/spdk_pid61349
00:32:02.020 Removing: /var/run/dpdk/spdk_pid61444
00:32:02.020 Removing: /var/run/dpdk/spdk_pid61486
00:32:02.020 Removing: /var/run/dpdk/spdk_pid61963
00:32:02.020 Removing: /var/run/dpdk/spdk_pid62063
00:32:02.020 Removing: /var/run/dpdk/spdk_pid62176
00:32:02.020 Removing: /var/run/dpdk/spdk_pid62229
00:32:02.020 Removing: /var/run/dpdk/spdk_pid62260
00:32:02.020 Removing: /var/run/dpdk/spdk_pid62343
00:32:02.020 Removing: /var/run/dpdk/spdk_pid62975
00:32:02.020 Removing: /var/run/dpdk/spdk_pid63017
00:32:02.020 Removing: /var/run/dpdk/spdk_pid63537
00:32:02.020 Removing: /var/run/dpdk/spdk_pid63641
00:32:02.020 Removing: /var/run/dpdk/spdk_pid63761
00:32:02.020 Removing: /var/run/dpdk/spdk_pid63814
00:32:02.020 Removing: /var/run/dpdk/spdk_pid63839
00:32:02.020 Removing: /var/run/dpdk/spdk_pid63865
00:32:02.020 Removing: /var/run/dpdk/spdk_pid65734
00:32:02.020 Removing: /var/run/dpdk/spdk_pid65878
00:32:02.020 Removing: /var/run/dpdk/spdk_pid65882
00:32:02.020 Removing: /var/run/dpdk/spdk_pid65894
00:32:02.020 Removing: /var/run/dpdk/spdk_pid65946
00:32:02.020 Removing: /var/run/dpdk/spdk_pid65950
00:32:02.020 Removing: /var/run/dpdk/spdk_pid65962
00:32:02.020 Removing: /var/run/dpdk/spdk_pid66007
00:32:02.020 Removing: /var/run/dpdk/spdk_pid66011
00:32:02.020 Removing: /var/run/dpdk/spdk_pid66023
00:32:02.020 Removing: /var/run/dpdk/spdk_pid66068
00:32:02.020 Removing: /var/run/dpdk/spdk_pid66077
00:32:02.020 Removing: /var/run/dpdk/spdk_pid66089
00:32:02.020 Removing: /var/run/dpdk/spdk_pid67484
00:32:02.020 Removing: /var/run/dpdk/spdk_pid67600
00:32:02.020 Removing: /var/run/dpdk/spdk_pid69024
00:32:02.020 Removing: /var/run/dpdk/spdk_pid70757
00:32:02.020 Removing: /var/run/dpdk/spdk_pid70837
00:32:02.020 Removing: /var/run/dpdk/spdk_pid70912
00:32:02.020 Removing: /var/run/dpdk/spdk_pid71023
00:32:02.020 Removing: /var/run/dpdk/spdk_pid71115
00:32:02.020 Removing: /var/run/dpdk/spdk_pid71222
00:32:02.020 Removing: /var/run/dpdk/spdk_pid71297
00:32:02.020 Removing: /var/run/dpdk/spdk_pid71378
00:32:02.020 Removing: /var/run/dpdk/spdk_pid71488
00:32:02.020 Removing: /var/run/dpdk/spdk_pid71580
00:32:02.020 Removing: /var/run/dpdk/spdk_pid71681
00:32:02.020 Removing: /var/run/dpdk/spdk_pid71761
00:32:02.020 Removing: /var/run/dpdk/spdk_pid71837
00:32:02.020 Removing: /var/run/dpdk/spdk_pid71947
00:32:02.020 Removing: /var/run/dpdk/spdk_pid72039
00:32:02.020 Removing: /var/run/dpdk/spdk_pid72141
00:32:02.020 Removing: /var/run/dpdk/spdk_pid72216
00:32:02.020 Removing: /var/run/dpdk/spdk_pid72296
00:32:02.020 Removing: /var/run/dpdk/spdk_pid72401
00:32:02.020 Removing: /var/run/dpdk/spdk_pid72503
00:32:02.020 Removing: /var/run/dpdk/spdk_pid72604
00:32:02.020 Removing: /var/run/dpdk/spdk_pid72680
00:32:02.020 Removing: /var/run/dpdk/spdk_pid72757
00:32:02.020 Removing: /var/run/dpdk/spdk_pid72837
00:32:02.020 Removing: /var/run/dpdk/spdk_pid72912
00:32:02.020 Removing: /var/run/dpdk/spdk_pid73021
00:32:02.020 Removing: /var/run/dpdk/spdk_pid73112
00:32:02.020 Removing: /var/run/dpdk/spdk_pid73207
00:32:02.020 Removing: /var/run/dpdk/spdk_pid73287
00:32:02.020 Removing: /var/run/dpdk/spdk_pid73361
00:32:02.020 Removing: /var/run/dpdk/spdk_pid73442
00:32:02.020 Removing: /var/run/dpdk/spdk_pid73517
00:32:02.020 Removing: /var/run/dpdk/spdk_pid73620
00:32:02.279 Removing: /var/run/dpdk/spdk_pid73711
00:32:02.279 Removing: /var/run/dpdk/spdk_pid73865
00:32:02.279 Removing: /var/run/dpdk/spdk_pid74150
00:32:02.279 Removing: /var/run/dpdk/spdk_pid74187
00:32:02.279 Removing: /var/run/dpdk/spdk_pid74685
00:32:02.279 Removing: /var/run/dpdk/spdk_pid74865
00:32:02.279 Removing: /var/run/dpdk/spdk_pid74963
00:32:02.279 Removing: /var/run/dpdk/spdk_pid75075
00:32:02.279 Removing: /var/run/dpdk/spdk_pid75129
00:32:02.279 Removing: /var/run/dpdk/spdk_pid75160
00:32:02.279 Removing: /var/run/dpdk/spdk_pid75443
00:32:02.279 Removing: /var/run/dpdk/spdk_pid75515
00:32:02.279 Removing: /var/run/dpdk/spdk_pid75601
00:32:02.279 Removing: /var/run/dpdk/spdk_pid76017
00:32:02.279 Removing: /var/run/dpdk/spdk_pid76166
00:32:02.279 Removing: /var/run/dpdk/spdk_pid76946
00:32:02.279 Removing: /var/run/dpdk/spdk_pid77094
00:32:02.279 Removing: /var/run/dpdk/spdk_pid77288
00:32:02.279 Removing: /var/run/dpdk/spdk_pid77397
00:32:02.279 Removing: /var/run/dpdk/spdk_pid77784
00:32:02.279 Removing: /var/run/dpdk/spdk_pid78071
00:32:02.279 Removing: /var/run/dpdk/spdk_pid78426
00:32:02.279 Removing: /var/run/dpdk/spdk_pid78642
00:32:02.279 Removing: /var/run/dpdk/spdk_pid78769
00:32:02.279 Removing: /var/run/dpdk/spdk_pid78833
00:32:02.279 Removing: /var/run/dpdk/spdk_pid78971
00:32:02.279 Removing: /var/run/dpdk/spdk_pid79002
00:32:02.279 Removing: /var/run/dpdk/spdk_pid79068
00:32:02.280 Removing: /var/run/dpdk/spdk_pid79272
00:32:02.280 Removing: /var/run/dpdk/spdk_pid79526
00:32:02.280 Removing: /var/run/dpdk/spdk_pid79874
00:32:02.280 Removing: /var/run/dpdk/spdk_pid80312
00:32:02.280 Removing: /var/run/dpdk/spdk_pid80688
00:32:02.280 Removing: /var/run/dpdk/spdk_pid81197
00:32:02.280 Removing: /var/run/dpdk/spdk_pid81345
00:32:02.280 Removing: /var/run/dpdk/spdk_pid81446
00:32:02.280 Removing: /var/run/dpdk/spdk_pid82071
00:32:02.280 Removing: /var/run/dpdk/spdk_pid82152
00:32:02.280 Removing: /var/run/dpdk/spdk_pid82554
00:32:02.280 Removing: /var/run/dpdk/spdk_pid82964
00:32:02.280 Removing: /var/run/dpdk/spdk_pid83445
00:32:02.280 Removing: /var/run/dpdk/spdk_pid83568
00:32:02.280 Removing: /var/run/dpdk/spdk_pid83625
00:32:02.280 Removing: /var/run/dpdk/spdk_pid83695
00:32:02.280 Removing: /var/run/dpdk/spdk_pid83758
00:32:02.280 Removing: /var/run/dpdk/spdk_pid83822
00:32:02.280 Removing: /var/run/dpdk/spdk_pid84042
00:32:02.280 Removing: /var/run/dpdk/spdk_pid84121
00:32:02.280 Removing: /var/run/dpdk/spdk_pid84195
00:32:02.280 Removing: /var/run/dpdk/spdk_pid84275
00:32:02.280 Removing: /var/run/dpdk/spdk_pid84312
00:32:02.280 Removing: /var/run/dpdk/spdk_pid84393
00:32:02.280 Removing: /var/run/dpdk/spdk_pid84530
00:32:02.280 Clean
00:32:02.280 19:16:33 -- common/autotest_common.sh@1453 -- # return 0
00:32:02.280 19:16:33 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:32:02.280 19:16:33 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:02.280 19:16:33 -- common/autotest_common.sh@10 -- # set +x
00:32:02.280 19:16:33 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:32:02.280 19:16:33 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:02.280 19:16:33 -- common/autotest_common.sh@10 -- # set +x
00:32:02.538 19:16:33 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:02.538 19:16:33 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:02.538 19:16:33 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:02.538 19:16:33 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:32:02.538 19:16:33 -- spdk/autotest.sh@398 -- # hostname
00:32:02.538 19:16:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:02.538 geninfo: WARNING: invalid characters removed from testname!
00:32:34.648 19:17:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:36.025 19:17:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:39.313 19:17:10 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:41.849 19:17:12 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:44.381 19:17:15 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:47.668 19:17:18 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:50.202 19:17:21 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
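The seven lcov invocations above form one pipeline: capture a tracefile for this run, merge it with the pre-test baseline, then repeatedly filter out source trees that should not count toward coverage. Stripped of the repeated --rc options and long paths, the same flow reduces to the sketch below; $repo and $testname are placeholders for the repository path and host-derived test name used above, and only a representative subset of the filter passes is shown.

    # Capture, merge, and filter coverage data, as in the run above.
    lcov -q -c --no-external -d "$repo" -t "$testname" -o cov_test.info
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Each -r pass removes files matching a pattern from the tracefile.
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info --ignore-errors unused '/usr/*' -o cov_total.info
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info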
00:32:50.203 19:17:21 -- spdk/autorun.sh@1 -- $ timing_finish
00:32:50.203 19:17:21 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:32:50.203 19:17:21 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:50.203 19:17:21 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:50.203 19:17:21 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:50.212 + [[ -n 5288 ]]
00:32:50.212 + sudo kill 5288
00:32:50.222 [Pipeline] }
00:32:50.241 [Pipeline] // timeout
00:32:50.247 [Pipeline] }
00:32:50.263 [Pipeline] // stage
00:32:50.269 [Pipeline] }
00:32:50.284 [Pipeline] // catchError
00:32:50.294 [Pipeline] stage
00:32:50.296 [Pipeline] { (Stop VM)
00:32:50.309 [Pipeline] sh
00:32:50.589 + vagrant halt
00:32:53.859 ==> default: Halting domain...
00:33:00.428 [Pipeline] sh
00:33:00.707 + vagrant destroy -f
00:33:03.994 ==> default: Removing domain...
00:33:04.574 [Pipeline] sh
00:33:04.855 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:33:04.863 [Pipeline] }
00:33:04.882 [Pipeline] // stage
00:33:04.888 [Pipeline] }
00:33:04.903 [Pipeline] // dir
00:33:04.912 [Pipeline] }
00:33:04.927 [Pipeline] // wrap
00:33:04.934 [Pipeline] }
00:33:04.947 [Pipeline] // catchError
00:33:04.957 [Pipeline] stage
00:33:04.960 [Pipeline] { (Epilogue)
00:33:04.973 [Pipeline] sh
00:33:05.253 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:11.857 [Pipeline] catchError
00:33:11.858 [Pipeline] {
00:33:11.869 [Pipeline] sh
00:33:12.152 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:12.412 Artifacts sizes are good
00:33:12.421 [Pipeline] }
00:33:12.436 [Pipeline] // catchError
00:33:12.447 [Pipeline] archiveArtifacts
00:33:12.454 Archiving artifacts
00:33:12.574 [Pipeline] cleanWs
00:33:12.586 [WS-CLEANUP] Deleting project workspace...
00:33:12.586 [WS-CLEANUP] Deferred wipeout is used...
00:33:12.593 [WS-CLEANUP] done
00:33:12.595 [Pipeline] }
00:33:12.610 [Pipeline] // stage
00:33:12.615 [Pipeline] }
00:33:12.628 [Pipeline] // node
00:33:12.633 [Pipeline] End of Pipeline
00:33:12.672 Finished: SUCCESS
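As a closing note for anyone reading a saved copy of a log like this: the FTL trace_step notices earlier in the transcript pair each "name:" line with a following "duration:" line, so the slowest management steps can be ranked with a short filter. A hypothetical sketch, where the console log file name is illustrative:

    # Rank FTL management steps by duration from a saved console log.
    awk '/trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); print $1, name }' \
        console.log | sort -rn | head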